NAND Flash Architecture and Specification Trends - Micron
Santa Clara, CA
August 2012

NAND Flash Architecture and Specification Trends
Michael Abraham (mabraham@micron.com)
NAND Solutions Group Architect
Micron Technology, Inc.
Abstract
• NAND Flash is quickly moving to sub-20nm lithographies, making it the fastest-scaling semiconductor technology ever.
• What impact do these shrinks have on NAND's architecture, performance, and reliability in system solutions?
• Learn how to prepare for these changes and counteract some of them through improved system design.
• Also, take a look at innovative NAND technologies that improve performance and reliability.
Topics
• NAND Flash Architecture Trends
• The Cloud and Clients
• Enterprise Application Requirements
• ECC and SSD Topologies
NAND Process Migration: Shrinking Faster than Moore's Law
[Chart: process node (nm, 10–60) versus volume production dates (Q1-07 through Q4-12) for Companies A–D, showing migration from the 50nm class through the 30nm class to the 20nm class. Data based on publicly available information.]
Memory Organization Trends
NAND block size is increasing
• Larger page sizes and more planes increase sequential throughput
• More pages per block reduce die size
• As ECC requirements increase, the spare area per NAND page is increasing
[Chart: block size (B), data bytes per operation, and pages per block over time, on a log scale from 8 to 8,388,608]
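The block-size trend above follows directly from basic NAND geometry. A minimal sketch, with the function name and example numbers purely illustrative (not tied to any specific part):

```python
def block_geometry(page_bytes: int, pages_per_block: int, planes: int) -> dict:
    """Derive the quantities plotted above from basic geometry:
    block size = page size x pages per block, and data bytes per
    operation = page size x number of planes programmed in parallel."""
    return {
        "block_size_bytes": page_bytes * pages_per_block,
        "data_bytes_per_op": page_bytes * planes,
    }

# Illustrative: an 8KB-page, 256-pages-per-block, 2-plane die
geo = block_geometry(8192, 256, 2)
```

An 8KB page with 256 pages per block already yields a 2MB (2,097,152-byte) block, which is why block copy and erase granularity grow so quickly as pages and page sizes scale.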
Consumer-grade NAND Flash: Endurance and ECC Trends
• ECC improves data retention and endurance
• Process shrinks lead to fewer electrons per floating gate
• To adjust for increasing RBERs, ECC is increasing exponentially to achieve equivalent UBERs
• ECC algorithms are transitioning from BCH to LDPC, and codeword sizes are increasing
[Chart: number of electrons (10–10,000) and endurance (1,000–100,000 cycles) for SLC and MLC-2, versus required ECC (0–70 bits)]
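The RBER-to-UBER relationship behind this trend can be sketched with a simple binomial model, assuming independent bit errors and a t-bit-correcting BCH-style code. The codeword size and RBER below are illustrative assumptions, not slide data:

```python
from math import comb

def ecc_fail_prob(n_bits: int, rber: float, t: int) -> float:
    """Probability that a codeword of n_bits contains more than t raw
    bit errors, i.e. exceeds a t-bit-correcting ECC. Assumes each bit
    fails independently with probability rber (binomial model)."""
    p_corrected = sum(comb(n_bits, k) * rber**k * (1 - rber)**(n_bits - k)
                      for k in range(t + 1))
    return max(0.0, 1.0 - p_corrected)

# Illustrative: a 1KB codeword (8192 bits) at an RBER of 1e-4
weak = ecc_fail_prob(8192, 1e-4, t=4)    # modest correction strength
strong = ecc_fail_prob(8192, 1e-4, t=40) # heavy correction strength
```

The steep payoff from added correction bits is why modest, roughly linear ECC growth can hold UBER constant even as RBER rises exponentially with shrinks.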
Larger Page Sizes Improve Sequential Write Performance
• For a fixed page size across process nodes, write throughput decreases as the NAND process shrinks
• NAND vendors increase the page size to compensate for slowing array performance
• Write throughput decreases with more bits per cell
[Chart: sequential programming throughput (0–100 MB/s) versus data bytes per operation (page size × number of planes) for SLC, MLC-2, and MLC-3]
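The compensation mechanism above can be sketched as a first-order throughput model: bytes moved per program cycle divided by transfer-plus-program time. All parameter values are illustrative assumptions; real devices add cache programming and other pipelining not modeled here:

```python
def seq_program_throughput_mbs(page_bytes: int, planes: int,
                               tprog_us: float, io_mt_s: float) -> float:
    """Sequential program throughput: bytes per operation divided by
    (I/O transfer time + array program time). Assumes an 8-bit bus,
    so the MT/s rate equals bytes per microsecond, and no cache
    programming."""
    bytes_per_op = page_bytes * planes
    xfer_us = bytes_per_op / io_mt_s
    return bytes_per_op / (xfer_us + tprog_us)  # bytes/us ~= MB/s

# Illustrative: 8KB page, 2 planes, tPROG 1300us, 200MT/s bus,
# versus the same die with a 16KB page
small = seq_program_throughput_mbs(8192, 2, 1300, 200)
large = seq_program_throughput_mbs(16384, 2, 1300, 200)
```

Because tPROG dominates the denominator, doubling the page size nearly doubles throughput, which is exactly the lever vendors pull when array performance slows.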
More Pages Per Block Affect Random Write Performance
• As block copy time increases, random performance decreases.
• Key factors that impact NAND Flash random write performance:
  1. Number of pages per block
  2. Increase of tPROG
  3. Increase in I/O transfer time due to larger page sizes (effect not shown below)
• Impact to system product random performance:
  • Some card interfaces have write timeout specs at 250ms.
  • To improve random performance, block management algorithms manage pages or partial blocks.
[Chart: block copy time (0–1000ms) versus pages per block / typical tPROG (µs) for SLC, MLC-2, and MLC-3, spanning configurations from 32/300 and 64/250 up through 256/1200, 512/1500, 192/2000+, and 384/2000+]
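The block copy times plotted above can be approximated as one page read plus one page program for every page in the block. A minimal sketch, with I/O transfer time excluded to match the "effect not shown" note; the example values are illustrative:

```python
def block_copy_time_ms(pages_per_block: int, tr_us: float,
                       tprog_us: float) -> float:
    """Worst-case block copy: every page in the block costs one page
    read (tR) plus one page program (tPROG). I/O transfer time for
    off-chip copies is deliberately excluded from this sketch."""
    return pages_per_block * (tr_us + tprog_us) / 1000.0

# Illustrative: 256 pages/block, tR=50us, tPROG=1200us
t = block_copy_time_ms(256, 50, 1200)
```

At 256 pages with a 1200µs tPROG this already exceeds 300ms, past a 250ms card write-timeout spec, which is why controllers move to page- or partial-block-granularity management instead of whole-block copies.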
Larger Monolithic NAND Densities Increase Random Read Latencies
• Most applications favor read operations over write operations
• Most read operations are 4KB data sectors
• As monolithic NAND density increases, fewer NAND die are used for a fixed system density
• As tPROG increases, the latency of random 4KB sector reads becomes more variable in mixed-operation environments, as the probability of needing to read from a busy die increases
[Chart 1: NAND Flash TAM by density (units), 2011–2016, spanning densities from 128Mb to 256Gb (32GB). Source: iSuppli 2Q12]
[Chart 2: 4KB random read latency (0–2000µs), min and max, versus tR/tPROG (µs): 25/250, 25/300, 50/600, 50/900, 50/1200, 65/1500]
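The min/max spread in the latency chart can be sketched as: best case, the die is idle and the read costs only tR plus the I/O transfer; worst case, the read arrives just after a program starts and must first wait out the full tPROG. The transfer time here is an assumed value, not one from the slide:

```python
def random_read_latency_us(tr_us: float, tprog_us: float,
                           xfer_us: float = 20.0):
    """Best-/worst-case 4KB random read latency on a single die.
    xfer_us is an assumed 4KB I/O transfer time (illustrative)."""
    lo = tr_us + xfer_us            # die idle: array read + transfer
    hi = tprog_us + tr_us + xfer_us # die busy: wait out tPROG first
    return lo, hi

# Illustrative: tR=50us, tPROG=1200us
lo, hi = random_read_latency_us(50, 1200)
```

The worst case is dominated by tPROG, so as tPROG grows and fewer, denser die serve the same capacity, read-latency variance grows with it.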
NAND Interface Trends for High-Performance Applications
[Chart: interface speed (0–900 MT/s) for single-channel and dual-channel packages across ONFI 1.0 (NV-SDR), ONFI 2.x (NV-DDR), and ONFI 3.0 (NV-DDR2)]
• Applications
  • Transitioned to 200MT/s interface
  • Shifting to 400MT/s interface
• Packaging
  • Typically BGA
  • 2-channel widely available
  • 4-channel being standardized
• ONFI 3.0-compatible components are available
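The package-level numbers in the chart follow from transfer rate times channel count. A minimal sketch, assuming the common 8-bit NAND bus (one byte per transfer); this is peak raw interface bandwidth, not sustained array throughput:

```python
def package_bandwidth_mbs(mt_per_s: float, channels: int,
                          bus_bits: int = 8) -> float:
    """Peak raw interface bandwidth of a NAND package: transfer rate
    x bytes per transfer x number of independent channels."""
    return mt_per_s * (bus_bits / 8) * channels

onfi2_dual = package_bandwidth_mbs(200, 2)  # dual-channel NV-DDR
onfi3_dual = package_bandwidth_mbs(400, 2)  # dual-channel NV-DDR2
```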
The Cloud's Impact on NAND System Architectures
• Cloud: long-term data
• Client: near-term data
Storage Comparison
Client Storage
• Information stored locally, on the device
• Consumer- or SSD-grade NAND Flash
Cloud Storage
• Information stored in hosted server farms or data centers
• SLC or Enterprise-grade NAND Flash
Comparison of NAND Flash by Application Requirement

Application Requirement       | Client Storage / Consumer | Client Storage / SSD Grade | Cloud Storage / Enterprise Grade
NAND Cell                     | MLC-2 → MLC-3             | MLC-2 → MLC-3              | SLC → MLC-2
Endurance / Cycling           | Up to 3K                  | Up to 3K                   | Up to 100K (SLC), up to 30K (MLC-2)
DPM                           | Consumer grade            | Better                     | Best
I/O Channel Throughput        | 40 → 200 MT/s             | 40 → 400 MT/s              | 133 → 400 MT/s
UBER                          | 1E-14                     | Less                       | Less
Data retention at max cycling | 1 year                    | 1 year                     | Less
NAND Package Placements       | Typically 1 to 4          | 4 to 16                    | Up to 32
How Do Enterprise Applications Meet Enterprise Requirements?

Application Requirement | Controller | SSD/Enterprise-grade NAND Flash
Higher system density   | Ability to handle many NAND Flash die – some controllers up to 256 die |
More throughput         | Page-based block management, DRAM cache, overprovisioning, more I/O channels, faster I/O channels, multiple ECC engines, simultaneous mixed operations |
Low-latency reads       | DRAM cache, use of smaller monolithic NAND densities | Lower DPM, faster I/O channel
How Do Enterprise Applications Meet Enterprise Requirements? (Part 2)

Application Requirement                     | Controller | SSD/Enterprise-grade NAND Flash
Higher endurance / reliability              | Higher ECC | More ECC required, lower UBER, higher endurance
More consistent use over time               | Balanced block management to reduce write amplification and provide even wear leveling so NAND die and blocks wear evenly |
Power within budget for parallel operations | Block management throttles parallelism as needed | Peak power reduction
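The payoff of reducing write amplification through balanced block management can be sketched with a standard back-of-envelope lifetime estimate. Every input value below is an illustrative assumption, not a product specification:

```python
def drive_lifetime_days(capacity_gb: float, rated_cycles: int,
                        write_amp: float, host_gb_per_day: float) -> float:
    """Rough SSD lifetime: the total NAND program/erase budget
    (capacity x rated cycles) divided by the amplified host write
    rate. Ignores overprovisioning and retention limits."""
    total_nand_writes_gb = capacity_gb * rated_cycles
    return total_nand_writes_gb / (host_gb_per_day * write_amp)

# Illustrative: a 256GB MLC-2 drive rated 3K cycles, 50GB/day host writes
balanced = drive_lifetime_days(256, 3000, 2.0, 50)   # good block management
naive = drive_lifetime_days(256, 3000, 10.0, 50)     # high write amplification
```

Since lifetime scales inversely with write amplification, a controller that holds WA near 2 instead of 10 buys a 5x endurance margin from the same NAND.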
ECC and Algorithms
• Enterprise-grade NAND requires more ECC than consumer-grade NAND Flash to achieve higher endurance and lower UBER
• Providing more ECC to a consumer-grade NAND Flash does not necessarily improve endurance, though it can improve data retention
• ECC requirements are going to increase to the point that the ECC engine will consume a significant amount of real estate on a multi-channel controller
[Chart: logic gate counts (0–1,000,000) for BCH (t=16), BCH (t=29), BCH (t=60), RS/TCM, and LDPC]
How to Handle Increasing ECC?
• ECC is NAND Flash technology dependent and is implemented in hardware
• Block management and drivers are not technology dependent and can be updated in software/firmware
• ECC-Free Solution
  • Tightly couples ECC to the NAND technology
  • Also covers NAND aggregation, reducing channel loading
  • Block management performed in the processor can use a DRAM buffer, resulting in higher performance than a fully managed solution
Questions?
Other Micron Presentations
• NAND uses in high performance platforms
  • Tutorial A-11 – Tuesday, August 21st @ 8:30 am
• NAND flash architecture and specification trends
  • Tutorial B-11 – Tuesday, August 21st @ 8:30 am
• MLC media discussion
  • Tutorial C-11 – Tuesday, August 21st @ 8:30 am
• Next-generation storage and the mobile computing ecosystem
  • Session 101-B – Tuesday, August 21st @ 8:30 am
• Why ECC-free NAND is the best solution for high-performance applications
  • Session 102-A – Tuesday, August 21st @ 10:10 am
• How ONFI standards are fueling high-performance SSDs
  • Session 102-C – Tuesday, August 21st @ 10:10 am
• The need for differentiated MLC solutions
  • Tutorial F-21 – Wednesday, August 22nd @ 8:30 am
• Virtualized SSD storage for enterprise systems
  • Tutorial H-22 – Wednesday, August 22nd @ 4:30 pm
• Performance trade-offs of flash-based client storage solutions
  • Tutorial A-31 – Thursday, August 23rd @ 8:30 am
• Phase Change Memory – Panel Discussion
  • Session 302-D – Thursday, August 23rd @ 9:50 am
• 2.5-inch PCIe interface for enterprise flash cache – Panel Discussion
  • Session 303-B – Thursday, August 23rd @ 3:10 pm
About Michael Abraham
• Architect in the NAND Solutions Group at Micron
• Covers advanced NAND and PCM interfaces and system solutions
• IEEE Senior Member
• BS degree in Computer Engineering from Brigham Young University

©2007-2012 Micron Technology, Inc. All rights reserved. Products are warranted only to meet Micron's production data sheet specifications. Information, products and/or specifications are subject to change without notice. All information is provided on an "AS IS" basis without warranties of any kind. Dates are estimates only. Drawings not to scale. Micron and the Micron logo are trademarks of Micron Technology, Inc. All other trademarks are the property of their respective owners.