Skip to content

Stabilarity Hub

Menu
  • Home
  • Research
    • Healthcare & Life Sciences
      • Medical ML Diagnosis
    • Enterprise & Economics
      • AI Economics
      • Cost-Effective AI
      • Spec-Driven AI
    • Geopolitics & Strategy
      • Anticipatory Intelligence
      • Future of AI
      • Geopolitical Risk Intelligence
    • AI & Future Signals
      • Capability–Adoption Gap
      • AI Observability
      • AI Intelligence Architecture
      • AI Memory
      • Trusted Open Source
    • Data Science & Methods
      • HPF-P Framework
      • Intellectual Data Analysis
      • Reference Evaluation
    • Publications
      • External Publications
    • Robotics & Engineering
      • Open Humanoid
    • Benchmarks & Measurement
      • Universal Intelligence Benchmark
      • Shadow Economy Dynamics
      • Article Quality Science
  • Tools
    • Healthcare & Life Sciences
      • ScanLab
      • AI Data Readiness Assessment
    • Enterprise Strategy
      • AI Use Case Classifier
      • ROI Calculator
      • Risk Calculator
      • Reference Trust Analyzer
    • Portfolio & Analytics
      • HPF Portfolio Optimizer
      • Adoption Gap Monitor
      • Data Mining Method Selector
    • Geopolitics & Prediction
      • War Prediction Model
      • Ukraine Crisis Prediction
      • Gap Analyzer
      • Geopolitical Stability Dashboard
    • Technical & Observability
      • OTel AI Inspector
    • Robotics & Engineering
      • Humanoid Simulation
    • Benchmarks
      • UIB Benchmark Tool
  • API Gateway
  • About
    • Contributors
  • Contact
  • Join Community
  • Terms of Service
  • Login
  • Register
Menu

Attention Memory Patterns — What Models Actually Store in KV-Cache

Posted on March 19, 2026 by
AI MemoryTechnical Research · Article 2 of 29
By Oleh Ivchenko
Academic Citation: Ivchenko, Oleh (2026). Attention Memory Patterns — What Models Actually Store in KV-Cache. Research article: Attention Memory Patterns — What Models Actually Store in KV-Cache. Odessa National Polytechnic University, Department of Economic Cybernetics.
DOI: 10.5281/zenodo.19116558[1]  ·  View on Zenodo (CERN)
DOI: 10.5281/zenodo.19116558[1]Zenodo ArchiveORCID
2,736 words · 39% fresh refs · 3 diagrams · 19 references

70stabilfr·wdophcgmx
BadgeMetricValueStatusDescription
[s]Reviewed Sources11%○≥80% from editorially reviewed sources
[t]Trusted100%✓≥80% from verified, high-quality sources
[a]DOI95%✓≥80% have a Digital Object Identifier
[b]CrossRef11%○≥80% indexed in CrossRef
[i]Indexed100%✓≥80% have metadata indexed
[l]Academic11%○≥80% from journals/conferences/preprints
[f]Free Access11%○≥80% are freely accessible
[r]References19 refs✓Minimum 10 references required
[w]Words [REQ]2,736✓Minimum 2,000 words for a full research article. Current: 2,736
[d]DOI [REQ]✓✓Zenodo DOI registered for persistent citation. DOI: 10.5281/zenodo.19116558
[o]ORCID [REQ]✓✓Author ORCID verified for academic identity
[p]Peer Reviewed [REQ]—✗Peer reviewed by an assigned reviewer
[h]Freshness [REQ]39%✗≥80% of references from 2025–2026. Current: 39%
[c]Data Charts0○Original data charts from reproducible analysis (min 2). Current: 0
[g]Code—○Source code available on GitHub
[m]Diagrams3✓Mermaid architecture/flow diagrams. Current: 3
[x]Cited by0○Referenced by 0 other hub article(s)
Score = Ref Trust (82 × 60%) + Required (3/5 × 30%) + Optional (1/4 × 10%)

Abstract #

The key-value (KV) cache is the operational memory of transformer-based large language models (LLMs), storing intermediate attention representations that grow linearly with sequence length and quadratically impact computational cost. Yet what exactly do models store in these key and value vectors, and how uniformly is this information distributed across heads and layers? This article presents a systematic analysis of attention memory patterns in modern LLMs, examining head specialization phenomena — where individual attention heads develop distinct functional roles as positional trackers, syntactic parsers, or semantic integrators. We investigate the attention sink phenomenon, wherein models concentrate disproportionate attention on initial tokens regardless of semantic content, and analyze information density gradients across transformer layers. Drawing on recent empirical measurements of KV-cache utilization rates and redundancy patterns, we establish that 40–70% of cached key-value pairs carry minimal information for downstream generation. These findings directly inform cache eviction and compression strategies, demonstrating that attention-aware pruning can reduce cache memory by 2.7–5.7x with near-lossless accuracy. The implications extend to practical deployment: understanding what models actually store in their attention memory is prerequisite to building systems that manage this memory efficiently.

1. Introduction #

Every token generated by a transformer-based LLM requires consulting an accumulated history of key-value pairs — the KV-cache — that encodes how previous tokens should influence the current prediction. As context windows expand to 128K tokens and beyond, this cache becomes the dominant memory bottleneck in inference systems, consuming tens of gigabytes per concurrent request ([1][2]). The challenge is not merely one of storage capacity but of understanding what information this cache actually preserves and whether all of it is necessary.

The transformer attention mechanism projects each token’s hidden state into query, key, and value vectors through learned linear transformations. Keys encode what information a token offers for matching; values encode what information it contributes when matched. During generation, the current token’s query vector is compared against all cached keys to produce attention weights, which then aggregate cached values into the output representation ([2]). This mechanism is elegant but profligate: it stores every token’s projections across every layer and every head, regardless of whether that information will ever be retrieved.

Recent work has revealed that this apparent wastefulness masks significant internal structure. Attention heads specialize into distinct functional categories — some track positional relationships, others parse syntactic dependencies, and still others integrate semantic meaning ([3][3]). Information density varies dramatically across layers, with early layers capturing local token interactions and later layers encoding abstract semantic relationships ([4][4]). Understanding these patterns is essential for any approach to KV-cache optimization, because compressing uniformly what is structured non-uniformly inevitably destroys critical information while preserving redundant data.

This article examines what transformers actually store in their attention memory, how attention patterns reveal exploitable redundancy, and what these findings mean for practical cache management strategies.

2. What Key and Value Vectors Encode #

The functional distinction between keys and values is architecturally simple but informationally complex. Key vectors serve as addressing mechanisms — they determine which tokens will be attended to by encoding features that queries can match against. Value vectors serve as content carriers — they determine what information flows forward when a token is attended to. This separation creates an asymmetry that has direct implications for cache compression.

flowchart TD
    subgraph Input_Processing
        H[Hidden State h_t] --> WK[Key Projection W_K]
        H --> WV[Value Projection W_V]
        H --> WQ[Query Projection W_Q]
    end
    WK --> K[Key Vector k_t]
    WV --> V[Value Vector v_t]
    WQ --> Q[Query Vector q_t]
    subgraph Cache_Storage
        K --> KC[Key Cache]
        V --> VC[Value Cache]
    end
    Q --> ATT[Attention Score Computation]
    KC --> ATT
    ATT --> W[Attention Weights]
    W --> AGG[Weighted Value Aggregation]
    VC --> AGG
    AGG --> OUT[Output Representation]

Empirical analysis of key vector distributions reveals that they encode a mixture of positional and content-based features. In models using rotary position embeddings (RoPE), the positional information is injected directly into key vectors through rotation matrices, creating a geometric structure where nearby tokens have similar key orientations ([5][5]). This means that a substantial portion of the key vector’s capacity is dedicated to encoding where a token appears rather than what it contains.

Value vectors, by contrast, show markedly different distributional properties. Recent quantization studies demonstrate that value vectors exhibit higher variance and more outlier dimensions than key vectors, making them more sensitive to precision reduction ([6][6]). The KVC-Q framework found that dynamic quantization must allocate different bit-widths to keys versus values to maintain generation quality — keys tolerate 2-bit quantization across most heads, while values require 4-bit precision in semantically critical layers. This asymmetry reflects the different information densities: keys carry structured, low-entropy positional patterns; values carry high-entropy semantic content.

The DiffKV system exploits this distinction through differentiated memory management, applying aggressive compression to key caches while preserving value caches at higher fidelity ([7]). Their measurements show that key vectors across adjacent layers exhibit cosine similarities exceeding 0.95, indicating massive cross-layer redundancy in the addressing mechanism, while value vectors show substantially lower inter-layer correlation (0.6–0.8), confirming that each layer contributes distinct semantic content through its values.

3. Attention Head Specialization #

Not all attention heads are created equal. Through training, individual heads develop specialized functional roles that determine what information they extract from the KV-cache and how they contribute to the model’s overall computation. Understanding this specialization is critical because it reveals which heads — and therefore which cached key-value pairs — carry essential versus redundant information.

Research on attention head intervention in transformers identifies three primary categories of head specialization ([3][3]). Positional heads attend primarily based on token distance, implementing patterns such as “attend to the previous token” or “attend to tokens within a fixed window.” These heads are prevalent in early layers and encode local context — they effectively implement n-gram-like features within the attention framework. Syntactic heads track grammatical relationships, exhibiting attention patterns that align with dependency parse trees — attending from verbs to their subjects, from pronouns to their antecedents, or from modifiers to the words they modify. Semantic heads, concentrated in later layers, attend based on meaning similarity and topical relevance rather than structural relationships.

The Q Cache analysis of multimodal LLMs provides quantitative evidence that this specialization creates dramatic differences in head importance ([4][4]). Their measurements show that visual attention is valuable in less than half of the decode layers, meaning that entire layers’ worth of KV-cache entries contribute negligibly to generation quality for visual tokens. This finding generalizes: across modalities, certain heads function as information bottlenecks while others serve as redundant backup pathways.

Grouped query attention (GQA) architectures exploit head redundancy structurally by sharing key-value pairs across multiple query heads ([8]). Graph-based query clustering reveals that attention heads within the same group naturally converge on similar attention patterns during training, confirming that the redundancy is not an artifact but a fundamental property of how transformers allocate their representational capacity. The implication for cache management is direct: if multiple heads attend to the same tokens with similar weights, storing distinct copies of their key-value pairs wastes memory proportional to the redundancy factor.

4. The Attention Sink Phenomenon #

One of the most counterintuitive patterns in transformer attention is the attention sink — the consistent allocation of disproportionately high attention weights to the first token in a sequence, regardless of its semantic content. A period, a space, or a begin-of-sequence token all receive substantially more attention than their informational contribution warrants.

flowchart LR
    subgraph Layer_Behavior
        direction TB
        L1[Layer 1-4: Strong Sink]
        L2[Layer 5-12: Moderate Sink]
        L3[Layer 13-24: Weak Sink]
        L4[Layer 25-32: Semantic Override]
    end
    subgraph Attention_Distribution
        direction TB
        T1[Token 1: 15-40% attention mass]
        T2[Token 2-5: 5-10% each]
        T3[Token 6-N: Normal distribution]
    end
    L1 --> T1
    L2 --> T1
    L3 --> T3
    L4 --> T3

Empirical investigation of when and why attention sinks emerge reveals that this phenomenon is not a training artifact but a functional mechanism ([9]). The attention sink serves as a “no-op” target — when an attention head has no relevant information to retrieve for a given query, it distributes weight to the sink token rather than spreading it across irrelevant tokens. This prevents the injection of noise into the value aggregation. Early feed-forward network layers amplify the hidden state norm of the first token, creating a stable attractor in attention space that functions as a default routing destination.

The practical consequence for KV-cache management is that the first few tokens must always be retained in cache, even under aggressive eviction policies. The StreamingLLM framework formalized this requirement, demonstrating that retaining just 4 sink tokens alongside a sliding window of recent tokens enables stable generation over arbitrarily long sequences. Without the sink tokens, perplexity degrades catastrophically — not because they contain useful information, but because they provide the attention mechanism with a safe default target.

Hardware-level analysis at ISSCC 2026 demonstrated that attention sink patterns create predictable memory access patterns that can be exploited by specialized accelerator architectures ([10][7]). The Tri-Oracle accelerator uses token-attention-weight redundancy prediction to reduce energy consumption to 17.78 microjoules per token, partly by recognizing that sink token accesses are invariant across generation steps and can be cached in register files rather than fetched from main memory.

5. Information Density Across Layers #

Transformer layers do not contribute equally to the model’s output. Information density — measured as the mutual information between a layer’s attention patterns and the final prediction — follows a characteristic gradient that has been empirically validated across model families and scales.

flowchart TB
    subgraph Early_Layers_1_to_8
        E1[Local token interactions]
        E2[Positional encoding integration]
        E3[Character and subword patterns]
    end
    subgraph Middle_Layers_9_to_20
        M1[Syntactic dependency tracking]
        M2[Phrase structure recognition]
        M3[Cross-clause reference resolution]
    end
    subgraph Late_Layers_21_to_32
        S1[Semantic composition]
        S2[Task-specific reasoning]
        S3[Output vocabulary projection]
    end
    Early_Layers_1_to_8 --> Middle_Layers_9_to_20
    Middle_Layers_9_to_20 --> Late_Layers_21_to_32
    subgraph Cache_Compressibility
        C1[High: 70-80% prunable]
        C2[Medium: 40-60% prunable]
        C3[Low: 20-30% prunable]
    end
    Early_Layers_1_to_8 -.-> C1
    Middle_Layers_9_to_20 -.-> C2
    Late_Layers_21_to_32 -.-> C3

Early layers (1–8 in a 32-layer model) primarily process local context. Their attention patterns are dominated by positional heads that attend to nearby tokens, implementing something analogous to convolutional feature extraction. The KV-cache entries for these layers exhibit high redundancy — entropy-guided analysis shows that early-layer attention distributions have low entropy, meaning they attend to predictable, narrow sets of tokens ([11][8]). This makes early-layer caches highly compressible: up to 70–80% of entries can be pruned without measurable quality degradation.

Middle layers (9–20) represent a transition zone where syntactic processing dominates. These layers track grammatical dependencies across clause boundaries, resolve co-references, and build phrase-level representations. Their cache entries show moderate redundancy — some heads focus on stereotyped syntactic patterns (subject-verb agreement, relative clause attachment) that are compressible, while others track idiosyncratic dependencies that must be preserved.

Late layers (21–32) perform semantic composition and task-specific reasoning. Their attention patterns are the least redundant and most query-dependent — they attend to tokens based on meaning rather than position or syntax, making their attention distributions high-entropy and difficult to predict. The EMPIRIC framework’s measurements confirm that late-layer cache eviction causes disproportionate quality degradation: removing 50% of entries from the final 8 layers increases perplexity by 12–18%, compared to less than 2% for the same reduction in early layers ([9]).

Multi-tier dynamic storage systems exploit this gradient by placing late-layer caches in fast GPU HBM memory while offloading early-layer caches to slower but larger CPU DRAM or NVMe storage ([12][9]). Under resource-constrained edge deployment conditions, this tiered approach achieves 2–3x effective cache capacity without proportional hardware cost increases.

6. Redundancy Patterns and Cache Eviction Strategies #

The non-uniform distribution of information across heads and layers creates exploitable redundancy patterns that inform intelligent cache eviction. Three principal axes of redundancy have been identified: temporal (across sequence positions), cross-head (across attention heads within a layer), and cross-layer (across the depth of the network).

Temporal redundancy arises because not all cached tokens remain relevant as generation proceeds. The sparse attention framework for multiple-context KV-cache demonstrates that attention scores follow a power-law distribution — a small fraction of cached tokens receive the majority of attention weight at any given generation step ([13]). Tokens that have not been significantly attended to in recent generation steps are unlikely to be attended to in future steps, enabling recency-weighted eviction policies that retain high-attention tokens while discarding stale entries.

Cross-head redundancy is substantial: within a single layer, multiple heads often attend to overlapping token sets with correlated weight distributions. The Kelle hardware-software co-design quantifies this redundancy through eDRAM-based caching that stores shared attention patterns once rather than per-head ([14]). Their systolic evictor architecture avoids stalling LLM execution during cache eviction, demonstrating that redundancy-aware eviction can be implemented at hardware speed without pipeline bubbles.

Cross-layer redundancy, as noted in Section 2, is most pronounced in key vectors. The cross-layer attention approach reduces cache size by sharing key-value pairs across adjacent layers, exploiting the observation that layer N and layer N+1 often compute nearly identical key projections for the same token. The efficient multimodal LLM inference framework combines dynamic quantization with cross-layer sharing to achieve 4x cache reduction with less than 1% accuracy degradation on standard benchmarks ([15]).

Hybrid quantization approaches represent a complementary strategy to eviction. The Oaken system combines online calibration during prefill with offline quantization profiles to adaptively compress KV-cache entries based on their measured importance ([16]). Keys and values are quantized to different bit-widths per head per layer, with the quantization schedule determined by attention pattern analysis during the prefill phase. This approach achieves near-lossless quality at 2.5 average bits per element, compared to the standard 16-bit representation — a 6.4x compression ratio.

The IEEE Computer Architecture Letters analysis of KV-cache quantization in multimodal models confirms that optimal quantization strategies must be attention-pattern-aware ([17][10]). Heads with high attention entropy (diverse, distributed attention patterns) require higher precision; heads with low entropy (concentrated, predictable patterns) tolerate aggressive quantization. This directly connects back to head specialization: positional heads with predictable patterns compress well; semantic heads with varied patterns require preservation.

7. Implications for System Design #

The empirical findings on attention memory patterns converge on a clear architectural principle: KV-cache management should be heterogeneous, adapting compression and eviction strategies to the information density of each head, layer, and sequence position. Uniform approaches — applying the same quantization, the same eviction threshold, the same memory tier to all cache entries — leave substantial efficiency gains unrealized.

On-device LLM deployment makes this principle urgent. Feasibility studies of on-device inference show that KV-cache memory is the primary constraint limiting context length on mobile and edge devices, where total available memory may be 4–8 GB shared between model weights and cache ([1][2]). Attention-aware cache management can extend effective context length by 3–5x on the same hardware, transforming deployment that was previously infeasible into practical operation.

The path forward requires tighter integration between attention pattern analysis and cache management policies. Current systems typically analyze attention patterns post-hoc and apply static compression schedules. Future architectures should embed attention pattern monitoring into the inference pipeline itself, enabling real-time adaptation of cache management to the specific characteristics of each input sequence and generation trajectory.

8. Conclusion #

Transformer attention memory is not a homogeneous store of equally important information. It is a structured, layered system where positional heads in early layers encode predictable local patterns with high redundancy, syntactic heads in middle layers track grammatical relationships with moderate redundancy, and semantic heads in late layers compose meaning with minimal redundancy. The attention sink phenomenon adds a further structural element — initial tokens serve as functional no-op targets that must be preserved regardless of eviction policy.

These patterns are not merely academic observations. They directly determine the compression ratios achievable by cache eviction and quantization systems: 2.7–5.7x for differentiated key-value management, up to 6.4x for attention-aware hybrid quantization, and 3–5x effective context extension for on-device deployment. The gap between uniform and attention-aware cache management represents one of the most significant practical optimization opportunities in current LLM inference systems.

Understanding what models store in their KV-cache — and what they do not need to store — transforms cache management from a systems engineering problem into a scientific question about how transformers organize and retrieve information. The answer to that question, as the evidence presented here demonstrates, is that transformers are far more structured in their memory usage than their architectures would suggest, and that structure is the key to efficient deployment.

References (10) #

  1. Stabilarity Research Hub. Attention Memory Patterns — What Models Actually Store in KV-Cache. doi.org. dti
  2. Just a moment…. doi.org. dti
  3. Demystifying Transformer: A Case Study on Graph-Based Interpretability and Attention Layer Dynamics | Springer Nature Link. doi.org. dti
  4. 403 Forbidden. doi.org. dti
  5. (2026). MDLA: Multi-Head Latent Linear Attention for Low-Complexity and Memory-Efficient KV Cache | IEEE Conference Publication | IEEE Xplore. doi.org. dti
  6. (2026). Redirecting. doi.org. dti
  7. (2026). Tri-Oracle: A 17.78μJ/Token Vision-Language Model Accelerator with Token-Attention-Weight Redundancy Prediction | IEEE Conference Publication | IEEE Xplore. doi.org. dti
  8. Kim, Heekyum; Jung, Yuchul. (2025). Entropy-Guided KV Caching for Efficient LLM Inference. doi.org. dcrtil
  9. Wang, Junliang; Hu, Jiaqi; Cao, Qingping; Zhu, Yuanrui; Lin, Xiancheng. (2026). Multi-tier dynamic storage of KV cache for LLM inference under resource-constrained conditions. doi.org. dcrtil
  10. (2025). Exploring KV Cache Quantization in Multimodal Large Language Model Inference | IEEE Journals & Magazine | IEEE Xplore. doi.org. dti
← Previous
KV-Cache Fundamentals — How Transformers Remember (and Forget)
Next →
Context Window Utilization — How Much of the Window Do Models Really Use?
All AI Memory articles (29)2 / 29
Version History · 1 revisions
+
RevDateStatusActionBySize
v0Mar 19, 2026CURRENTFirst publishedAuthor20921 (+20921)

Versioning is automatic. Each revision reflects editorial updates, reference validation, or formatting changes.

Recent Posts

  • Comparative Benchmarking: HPF-P vs Traditional Portfolio Methods
  • The Future of Intelligence Measurement: A 10-Year Projection
  • All-You-Can-Eat Agentic AI: The Economics of Unlimited Licensing in an Era of Non-Deterministic Costs
  • The Future of AI Memory — From Fixed Windows to Persistent State
  • FLAI & GROMUS Mathematical Glossary: Complete Variable Reference for Social Media Trend Prediction Models

Research Index

Browse all articles — filter by score, badges, views, series →

Categories

  • ai
  • AI Economics
  • AI Memory
  • AI Observability & Monitoring
  • AI Portfolio Optimisation
  • Ancient IT History
  • Anticipatory Intelligence
  • Article Quality Science
  • Capability-Adoption Gap
  • Cost-Effective Enterprise AI
  • Future of AI
  • Geopolitical Risk Intelligence
  • hackathon
  • healthcare
  • HPF-P Framework
  • innovation
  • Intellectual Data Analysis
  • medai
  • Medical ML Diagnosis
  • Open Humanoid
  • Research
  • ScanLab
  • Shadow Economy Dynamics
  • Spec-Driven AI Development
  • Technology
  • Trusted Open Source
  • Uncategorized
  • Universal Intelligence Benchmark
  • War Prediction

About

Stabilarity Research Hub is dedicated to advancing the frontiers of AI, from Medical ML to Anticipatory Intelligence. Our mission is to build robust and efficient AI systems for a safer future.

Language

  • Medical ML Diagnosis
  • AI Economics
  • Cost-Effective AI
  • Anticipatory Intelligence
  • Data Mining
  • 🔑 API for Researchers

Connect

Facebook Group: Join

Telegram: @Y0man

Email: contact@stabilarity.com

© 2026 Stabilarity Research Hub

© 2026 Stabilarity Hub | Powered by Superbs Personal Blog theme
Stabilarity Research Hub

Open research platform for AI, machine learning, and enterprise technology. All articles are preprints with DOI registration via Zenodo.

185+
Articles
8
Series
DOI
Archived

Research Series

  • Medical ML Diagnosis
  • Anticipatory Intelligence
  • Intellectual Data Analysis
  • AI Economics
  • Cost-Effective AI
  • Spec-Driven AI

Community

  • Join Community
  • MedAI Hack
  • Zenodo Archive
  • Contact Us

Legal

  • Terms of Service
  • About Us
  • Contact
Operated by
Stabilarity OÜ
Registry: 17150040
Estonian Business Register →
© 2026 Stabilarity OÜ. Content licensed under CC BY 4.0
Terms About Contact
Language: 🇬🇧 EN 🇺🇦 UK 🇩🇪 DE 🇵🇱 PL 🇫🇷 FR
Display Settings
Theme
Light
Dark
Auto
Width
Default
Column
Wide
Text 100%

We use cookies to enhance your experience and analyze site traffic. By clicking "Accept All", you consent to our use of cookies. Read our Terms of Service for more information.