CRAZY Fast Computer!!!

Interesting, 6TB of storage with a $12,000 price tag and 2000MB/s of transfer isn't a bad deal. That said, they were using RAID-0, which has few uses....

Their tests were pretty ****ty... I mean, Windows Vista is not the ideal I/O test environment, but I'm sure it impresses the average Joe.

Of course, under high-write conditions, a RAID-0 array wouldn't take long to fail.

SSD is coming though...which really is great to see.
 

If you're doing a speed demo, nothin' beats RAID-0. If you want something reliable, RAID-10.

And to be accurate, RAID-0 isn't really RAID...it's just "AID," since the "R" stands for "Redundant" and striping alone gives you no redundancy.
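
For anyone who wants to see why striping is fast but fragile, here's a minimal Python sketch of the block layout (the drive count and block numbers are just examples):

```python
# Minimal sketch of RAID-0 ("AID") striping: logical blocks are spread
# round-robin across drives with no redundancy anywhere.

def stripe_map(logical_block: int, num_drives: int) -> tuple[int, int]:
    """Map a logical block to (drive index, block offset on that drive)."""
    return logical_block % num_drives, logical_block // num_drives

# With 4 drives, consecutive logical blocks land on different drives,
# which is where the speed comes from -- and why losing any one drive
# loses the whole array.
layout = [stripe_map(b, 4) for b in range(8)]
print(layout)  # [(0, 0), (1, 0), (2, 0), (3, 0), (0, 1), (1, 1), (2, 1), (3, 1)]
```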
 
SWEET! I loved the Crysis demo. Talk about one hell of a game rig!
 
We kinda knew this back in the mid 80s when we bought RAM disks.


(Wait. Did he say something?)
 
And to be accurate, RAID-0 isn't really RAID...it's just "AID," since the "R" stands for "Redundant" and striping alone gives you no redundancy.
Oh, I know what all the RAID levels are...which is why I was saying: it may have been fast as RAID-0, but in most applications you won't be running that.
 
The defrag was visually impressive, but other than that, is there any reason to bother defragmenting an SSD? Seems to me it would be of no benefit at all.

-Rich
 
IMHO, RAID 0 is not only suitable but a rather elegant architecture for SSD-type media. It leverages multiple wide IO buffers (multiple memory controllers, in effect), and it also has the security of being protected at the data layer by at least one, if not two, parity (P) bits per block.

It's quite rare for RAM to suffer from unbuffered dropped data bits. Yes, it happens, but it's very rare.

It would not be suitable for mag media.
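
To illustrate what a per-block parity (P) bit buys you, here's a minimal Python sketch (even parity assumed; it detects, but cannot correct, a single flipped bit):

```python
# Sketch of the kind of parity bit discussed above: even parity over a
# byte can detect (not correct) any single-bit flip.

def parity_bit(byte: int) -> int:
    """Even-parity bit: 1 if the byte has an odd number of set bits."""
    return bin(byte).count("1") % 2

stored = 0b1011_0010
p = parity_bit(stored)            # remembered alongside the data

corrupted = stored ^ 0b0000_1000  # a single bit flips
print(parity_bit(corrupted) != p)  # True -> the error is detected
```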
 

"Protected?" RAID 0 is only striping. Is there really a parity bit?

You'd get massive read performance with RAID 1, where you can roughly double the bandwidth of the media by splitting reads; for random as well as sequential I/O, RAID 0+1 gets you both advantages.
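
The split-read gain is easy to sketch; here's a hypothetical round-robin read scheduler in Python (the request names are invented):

```python
# Sketch: RAID 1 can serve reads from either mirror, so independent reads
# can be dispatched round-robin and run in parallel, roughly doubling
# read bandwidth. Writes must go to BOTH mirrors, so they gain nothing.

def schedule_reads(requests, num_mirrors=2):
    """Assign each read request to a mirror, round-robin."""
    return {m: requests[m::num_mirrors] for m in range(num_mirrors)}

reads = ["r0", "r1", "r2", "r3", "r4", "r5"]
print(schedule_reads(reads))
# {0: ['r0', 'r2', 'r4'], 1: ['r1', 'r3', 'r5']}
# Each mirror handles half the reads -> ~2x aggregate read throughput.
```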
 
"Protected?" RAID 0 is only striping. Is there really a parity bit?

You do understand that the 'SS' in SSD means solid state, as in DRAM? All RAM chips have parity built into them, unless of course, you specify non-parity, which I seriously doubt any SSD maker uses. That's why I mentioned that the parity would be at the data layer, as opposed to the block or stripe, as it were. In the event of a parity error on a chip, most OS/BIOS will report something like: 'Panic: R/C 54, Offset; 055769=0058CB4D3 DDRAM Parity Error'.

The other advantage that I didn't mention, but is a function of RAID 0 and SSD, is the very low latency. We don't have to worry about spindle location(s) and seek/settle times compared to rotational mag media. In a RAID architecture using spinning media, the latency plays a significant role when high-write-content data is in use. Some modern commercial disks are now using a dynamic caching algorithm on the disk chassis, but the majority of disks are read cache (naturally), so successive writes can't destage efficiently due to the latency/seek/settle parameters, which aren't found in SSD.

Now, the one type of delay that isn't helped by using SSD is the read-modify-write penalty found in RAID 5. Of course, we weren't using 5, so that's immaterial.
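
For anyone counting, that penalty falls straight out of the XOR parity math; a small Python sketch (block values are arbitrary):

```python
# Sketch of the RAID 5 read-modify-write penalty: updating ONE data block
# requires reading the old data and old parity, recomputing parity with
# XOR, then writing data and parity back -- 4 I/Os for 1 logical write.

def small_write(old_data: int, old_parity: int, new_data: int):
    io_count = 2  # read old data, read old parity
    new_parity = old_parity ^ old_data ^ new_data  # XOR out old, XOR in new
    io_count += 2  # write new data, write new parity
    return new_parity, io_count

# A stripe of data blocks plus their parity:
data = [0b1010, 0b0110, 0b1111]
parity = data[0] ^ data[1] ^ data[2]

new_parity, ios = small_write(data[1], parity, 0b0001)
data[1] = 0b0001
assert new_parity == data[0] ^ data[1] ^ data[2]  # parity still consistent
print(ios)  # 4 I/Os for a single-block update
```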

RAID 0 is 'protected' at the data layer by the parity calculation once the write reaches the chip level. Of course, DDRAM, or any other volatile memory chip, is susceptible to loss in the event of a power fault. One could use flash-type memory and overcome that limitation, but the access times of non-volatile memory chips are many times slower than your typical DDRAM. Of course, once the bytes are assembled at the IO controller layer, getting them to the DRAM chip is subject to error, but that's a very low failure probability.

If I were designing an SSD system, I would use a hybrid: DDRAM for the controller cache to destage data from the IO controller fast, with a battery system to keep it alive in the event of a power fault. The main memory bay would consist of double-parity non-volatile RAM (flash) type memory; this is typically what's found in modern cameras and those little SD flash memory sticks. Dynamic R/W cache allocation, and quad or octo 32-bit data paths to the server via a double-board PCI+ IO.
 
You do understand that the 'SS' in SSD means solid state, as in DRAM? All RAM chips have parity built into them, unless of course, you specify non-parity, which I seriously doubt any SSD maker uses. That's why I mentioned that the parity would be at the data layer, ...

Oh, right, gotcha. I'm still thinking having RAID 1 redundancy and parity at that layer serves to handle even a DRAM parity error due to "cosmic rays."

And my thought is that by having mirrored SSD on multiple I/O channels you can get I/O bandwidth that would probably approach the CPU bus bandwidth. Vista would SCREAM on that! :rolleyes:
 
Meh...SSDs aren't quite worth it yet in my opinion.

One of my next upgrades is going to be 2xWD 640 gig drives for RAID 0. Crazy speed on a budget.
 

Well, another person caught in the cosmic ray OWT (old wives' tale). We're still having some good guffaws about that after many years.

Mirroring can be done right, or it can be done wrong. Typically, in demos they have it set up for non-synchronous control, so the data is pounded to whichever side is ready first. Of course the other side will catch up eventually, but it's not truly RAID 1 (or 10) if the mirroring isn't coherent. There's no speed gain in mirroring unless the RAID engine is set up for it from the get-go and will off-load the double writes from the server bus. In fact, if the mirroring is controlled on the PCI card (common), the writes will always be slower.
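
The difference is easy to model; a toy Python sketch with made-up per-side latencies:

```python
# Sketch: a coherent (synchronous) mirror acknowledges a write only after
# BOTH sides complete, so latency is the max of the two; a lazy mirror
# acks after the first side lands and lets the other catch up later.

def sync_mirror_latency(side_a_ms: float, side_b_ms: float) -> float:
    return max(side_a_ms, side_b_ms)   # wait for the slower side

def lazy_mirror_latency(side_a_ms: float, side_b_ms: float) -> float:
    return min(side_a_ms, side_b_ms)   # ack as soon as either side lands

print(sync_mirror_latency(0.2, 5.0))  # 5.0 -- true RAID 1: slower but coherent
print(lazy_mirror_latency(0.2, 5.0))  # 0.2 -- great demo numbers, but the
                                      # mirrors are briefly out of sync
```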

In case you haven't noticed, I do this stuff for a living. I've seen it all, pretty much and some things work, some don't. The best legacy system I've ever seen was a Data General setup that has had several names and is now done by EMC called Clariion. It's not SSD, but it does just about everything right. Alas, it's EOL - sigh......
 
Well, another person caught in the cosmic ray OWT. We're still having some good guffaws about that after many years.
I know. I was just saying that vendors used to tell me that whenever I had a problem they could only answer with, "It works here."

In case you haven't noticed, I do this stuff for a living. I've seen it all, pretty much and some things work, some don't. The best legacy system I've ever seen was a Data General setup that has had several names and is now done by EMC called Clariion. It's not SSD, but it does just about everything right. Alas, it's EOL - sigh......

Hey! I bought one of those myself 14 years ago and we used them up until a few years ago. There are probably plenty still in service until EMC eases us into stuff that costs a few $mil.

What I do for a living now is decide what will work best right up to sending the $1M order. I still can't have any say about my own laptop. :redface:
 
In case you haven't noticed, I do this stuff for a living. ... The best legacy system I've ever seen was a Data General setup ... now done by EMC called Clariion. ...

You should probably stop by Overclock.net sometime! :yes:
 
You do understand that the 'SS' in SSD means solid state, as in DRAM?

In today's parlance, SSD now means a NAND flash drive. (Intel)

The other advantages that I didn't mention, but are a function of RAID 0 and SSD is the very low latency. We don't have to worry about spindle location(s) and seek/settle times compared to rotational mag media. ...

I was reading last week some white papers about Hitachi's CinemaStar hard drives. They talked about how video needs to be streamed out, how tolerant it is to data delays (not very), and how sometimes we must wait for the platter to make one more revolution before a data block is available. Made me realize a DVR is not a trivial thing to do...
 
In today's parlance, SSD now means a NAND flash drive. (Intel)



I was reading last week some white papers about Hitachi's CinemaStar hard drives. ... Made me realize a DVR is not a trivial thing to do...

Except that the required video stream is trivial next to the kind of rates any current SATA drive can hit. You can usually watch HD content in real time over 54Mbps WiFi, while SATA II has a design limit of 300MB/s.
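
To put rough numbers on it (the bitrates below are typical ballpark figures, not measurements from these drives):

```python
# Back-of-envelope: even a high HD video bitrate is tiny next to a
# SATA II link. Figures are rough/typical, for illustration only.

hd_stream_mbps = 20        # ~typical HD video bitrate, Mbit/s
sata2_link_mbps = 300 * 8  # 300 MB/s expressed in Mbit/s

streams_per_link = sata2_link_mbps // hd_stream_mbps
print(streams_per_link)  # 120 -- raw bandwidth isn't the DVR's problem;
                         # worst-case seek latency is.
```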

They keep saying you want purpose-made DVR hard drives. The modders report that the non-DVR hard drives work fine; the differences are merely in the firmware. Admittedly, for home use you want features like low power use, low noise and low heat.

I put the non-DVR Hitachi in an enclosure on my HD TiVo, but I've been buying the Western Digital Green AV-GP DVR drives after that. As above, I want to be green on power. Noise hasn't bothered me.
 
You do understand that the 'SS' in SSD means solid state, as in DRAM? ... If I were designing an SSD system, I would use a hybrid of DDRAM for the controller cache to destage data from the IO controller fast, then I would use a battery system to keep alive in the event of a power fault. ...
Uhm...I seriously doubt that these drives are using DRAM or that they have some internal battery to retain data. I've worked with SSD drives a *lot* over the years as they have developed, and I've seen very few using volatile DRAM. That basically defeats the purpose. I personally wouldn't be cool with my data being on DRAM with some internal battery keeping it alive. The day that battery died would come soon, and you'd be rather depressed to find an empty disk.

Do you have any knowledge of the above drive that suggests they use DRAM? Or are you just telling me I'm wrong about RAID-0 being a bad idea based on almost no information? It's ONE HELL of an assumption to assume that these SSD drives have parity data at the disk level, and an incorrect one at that. The positive about SSD failures is that they generally occur on a write instead of a read, and the data already written is generally recoverable.

With all of the above in consideration -- if you care about the data whatsoever -- RAID-0 across 24 SSD drives using NAND flash memory would be rather foolish. That said, there *ARE* applications where you don't care about the data and just want fast read times.
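
To put a number on "foolish": a stripe set survives only if every member survives, so the reliabilities multiply. A quick sketch (the per-drive survival rate is an assumed figure for illustration):

```python
# Sketch: RAID-0 reliability multiplies. If each drive independently
# survives the year with probability p, a 24-wide stripe survives with
# probability p**24 -- any single failure takes out the whole array.

per_drive_survival = 0.98  # assumed annual survival rate, for illustration
for width in (1, 2, 24):
    array_survival = per_drive_survival ** width
    print(width, round(array_survival, 3))
# 1 0.98
# 2 0.96
# 24 0.616
```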
 
You would be wrong. SSDs typically are built just the way I've described, depending on how costly they are. Most people are talking about home stuff or hobby stuff (like overclock). The link above to the Intel SSD is typical in the industry. They (Intel) use a proprietary cache manager called 'SOC'. That is basically a DRAM with index addressing to get the speed, while the flash carries the load. Other vendors use other names; same spit, different day.

As for your comment on write parity errors, SSD suffers an infinitely lower rate of write errors than mag media. In mag media the problem is solved by read-behind-write to check the data on the platter. Then, once verified, flush the disk cache and keep moving. If it's desired, you can read-behind-write to SSD as well. Or, alternately at higher cost and concomitantly higher speed, we can do LRC plus CRC and write once then forget it. If it's corrupted at write time, we'll fix it on read. Keep track of the LBA, and if you get too many in a fixed time mark the block bad, relocate in the table and move on.
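
A rough sketch of that write-once, fix-on-read bookkeeping in Python (CRC-32 from zlib stands in for the LRC+CRC pair, and the bad-block threshold is an invented number):

```python
import zlib

# Sketch of "write once, fix on read": store a checksum with each block,
# verify it on read, and remap a block once it accumulates too many errors.
# CRC-32 here stands in for the LRC+CRC pair mentioned above.

BAD_BLOCK_THRESHOLD = 3            # invented threshold, for illustration
error_counts: dict[int, int] = {}  # LBA -> observed corruption count
remapped: set[int] = set()

def write_block(data: bytes) -> tuple[bytes, int]:
    """Write path: no verification -- just stash data with its checksum."""
    return data, zlib.crc32(data)

def read_block(lba: int, stored: tuple[bytes, int]) -> bool:
    """Read path: verify checksum; mark the block bad after repeated hits."""
    data, crc = stored
    if zlib.crc32(data) == crc:
        return True
    error_counts[lba] = error_counts.get(lba, 0) + 1
    if error_counts[lba] >= BAD_BLOCK_THRESHOLD:
        remapped.add(lba)          # relocate in the table and move on
    return False

blk = write_block(b"payload")
print(read_block(7, blk))                   # True -- clean read
print(read_block(7, (b"p@yload", blk[1])))  # False -- corruption caught on read
```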

QED
 
You would be wrong. SSD typically are built just the way I've described, depending on how costly they are. ... That is basically a DRAM with index addressing to get the speed, while the flash carries the load. ...

lmfao.. yes, they do have DRAM, which acts essentially as a cache. This does not mean that your data is protected by parity, which is what you said above. Your data is not *STORED* on the DRAM.

I'm not sure why you won't just admit that every SSD drive out there does not protect your data with parity, as you originally stated. I said RAID-0 was stupid because your data was not protected. You told me that I was wrong because your data is protected with parity in the DRAM. I'm sorry, it's not, and with RAID-0 and the above SSD drive you do not have parity protection.

docmirror said:
IMHO, RAID 0 is not only suitable but a rather elegant architecture for the SSD type media. It leverages multiple wide IO buffers(multi memory controllers in effect), and it also has the security of being protected at the data layer by at least one, if not two P bits per block
Wrong. Your data is not stored in DRAM. RAID-0 is not that suitable if you care about the data.

docmirror said:
You do understand that the 'SS' in SSD means solid state, as in DRAM? All RAM chips have parity built into them, unless of course, you specify non-parity, which I seriously doubt any SSD maker uses. That's why I mentioned that the parity would be at the data layer, as opposed to the block or stripe, as it were. In the event of a parity error on a chip, most OS/BIOS will report something like: 'Panic: R/C 54, Offset; 055769=0058CB4D3 DDRAM Parity Error'.
Wrong. SS does not mean DRAM. Your data is not protected by parity with the drive used in this demo. Could you point me to a usable-sized SSD drive that has parity protection, Doc?
 
lmfao.. Could you point me to a usable sized SSD drive that has parity protection Doc?

I could, but I charge $475/hr min two hours for consult. I've given enough away for free so far. :)
 
I could, but I charge $475/hr min two hours for consult. I've given enough away for free so far. :)
Sorry -- I'm not going to pay $950 to get you to admit you were wrong above. The discussion was about the solid state drives in this video along with the other ones being mass-produced. You called me out with incorrect information, that is all.
 