

SSD drive corruption with vision using LabView real-time?

Hi All, sorry in advance for the cross-post.

 

After running LabVIEW Real-Time for a few months, doing some vision analysis and saving fail/pass images to an SSD, we're seeing what we suspect might be drive corruption.  We don't have any hard evidence, but we've noticed that if we clone the drive to a new drive and boot off that one instead, the problems magically go away.  If we put the original drive back in, the problems return.
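One way to confirm whether files are actually getting corrupted on the original drive would be to hash each image at save time and re-verify the hashes later. Purely as an illustration, here is a rough Python sketch that would run from a desktop machine against the mounted drive (not our LabVIEW code; the paths and file layout are made up):

# Illustrative sketch only: hash images at save time, re-verify later to
# detect silent corruption. Paths and layout are hypothetical.
import hashlib
import json
from pathlib import Path

IMAGE_DIR = Path("D:/inspection/fail_images")   # assumed image folder
MANIFEST = IMAGE_DIR / "checksums.json"

def sha256_of(path: Path) -> str:
    """Return the SHA-256 hex digest of a file, read in chunks."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def record_checksums() -> None:
    """Build (or rebuild) a manifest of image checksums."""
    manifest = {p.name: sha256_of(p) for p in IMAGE_DIR.glob("*.png")}
    MANIFEST.write_text(json.dumps(manifest, indent=2))

def verify_checksums() -> list[str]:
    """Return the names of files whose current hash no longer matches."""
    manifest = json.loads(MANIFEST.read_text())
    return [name for name, digest in manifest.items()
            if sha256_of(IMAGE_DIR / name) != digest]

if __name__ == "__main__":
    # First run records the baseline; later runs re-verify against it.
    if not MANIFEST.exists():
        record_checksums()
    else:
        bad = verify_checksums()
        print(f"{len(bad)} mismatched file(s):", bad[:10])

Any mismatches on a drive that has only been read since the baseline was taken would point at corruption on the drive itself rather than at the application.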

 

Does this sound familiar to anyone and if so, would you have any recommendations as to where to start digging?

 

(And if it turns out to be a file system issue, are there any log files we can check?)

 

Thanks in advance,
Dan

Message 1 of 4

@dmccarty 

 

Sounds like drive corruption.  Have you tried reformatting the drive and then putting the image back onto it?  If that reproduces the issue, I'd say you have a bad drive.

Regards,

Ben Johnson
ʕง•ᴥ•ʔง
Message 2 of 4

Hi there, thanks for the reply.

 

We haven't tried that exact fix, but we have bought new drives, formatted them, and put a fresh image on them.  That cycle has happened several times now, so it seems like there's an underlying cause rather than several drives in a row randomly going bad.

 

Most recently we tried something new and replaced the drive (an SSD) with a spinning-disk drive, in the hope that it will be less susceptible to how Pharlap handles (or handled) reads/writes/trims.  If you have any other ideas, please feel free to let me know.
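If write wear does turn out to be the culprit, another thing that might help is batching the image writes so the drive sees fewer, larger writes instead of many small ones. Just to illustrate the idea, a Python sketch (not what runs on the RT target; the buffer size, paths, and file names are invented):

# Idea sketch: accumulate encoded images in RAM and flush them in one pass,
# so the drive sees fewer, larger writes. All constants are invented.
import time
from pathlib import Path

OUT_DIR = Path("D:/inspection/fail_images")   # assumed destination folder
FLUSH_AT_BYTES = 64 * 1024 * 1024             # flush once ~64 MB is queued

_buffer: list[tuple[str, bytes]] = []
_buffered_bytes = 0

def queue_image(name: str, data: bytes) -> None:
    """Queue an encoded image; flush when the buffer is large enough."""
    global _buffered_bytes
    _buffer.append((name, data))
    _buffered_bytes += len(data)
    if _buffered_bytes >= FLUSH_AT_BYTES:
        flush()

def flush() -> None:
    """Write all queued images, then clear the buffer."""
    global _buffered_bytes
    for name, data in _buffer:
        (OUT_DIR / name).write_bytes(data)
    _buffer.clear()
    _buffered_bytes = 0

if __name__ == "__main__":
    OUT_DIR.mkdir(parents=True, exist_ok=True)
    # Queue a fake 2 MB "image" per inspection cycle, then flush the tail.
    for i in range(100):
        queue_image(f"fail_{i:05d}_{int(time.time())}.png", b"\x00" * (2 << 20))
    flush()

Whether something like that is practical obviously depends on how much RAM the target has and how many queued images we could afford to lose on a power failure.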

 

Thanks.

Message 3 of 4

What drive brand and model is this? How many images do you typically write per day, and how big are they? What is the hardware: computer brand, model, etc.?

 

SSDs do have a limited lifetime that is still significantly lower than that of HDDs. Write access is especially a problem, since SSD flash cells endure only a limited number of write cycles before they reach a state where they can't store enough charge to provide a reliable signal anymore. The SSD controller tries to level that out by remapping logical drive sectors to varying physical sectors on the fly, so that each cell sees roughly the same number of write cycles on average, but at some point the drive will still stop being reliable.
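As a rough back-of-the-envelope check, you can compare the daily write volume against the drive's rated endurance (TBW). All the numbers below are placeholders; substitute the real image count, image size, and the TBW figure from the drive's datasheet:

# Back-of-the-envelope SSD endurance estimate. Every number is a placeholder.
IMAGES_PER_DAY = 5_000            # assumed throughput
IMAGE_SIZE_MB = 2.0               # assumed average encoded image size
WRITE_AMPLIFICATION = 3.0         # assumed factor for small/random writes
RATED_TBW = 150.0                 # assumed rated endurance, terabytes written

host_writes_tb_per_day = IMAGES_PER_DAY * IMAGE_SIZE_MB / 1e6
nand_writes_tb_per_day = host_writes_tb_per_day * WRITE_AMPLIFICATION
lifetime_days = RATED_TBW / nand_writes_tb_per_day

print(f"Host writes:    {host_writes_tb_per_day:.3f} TB/day")
print(f"NAND writes:    {nand_writes_tb_per_day:.3f} TB/day (incl. write amplification)")
print(f"Estimated life: {lifetime_days:,.0f} days (~{lifetime_days / 365:.1f} years)")

If the estimate comes out anywhere near the few months you are observing, wear is a plausible explanation; if it comes out in decades, look elsewhere.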

 

Also, we have noticed that SSD quality varies VERY much between brands. Trying to save a few dollars by buying a no-name or low-end brand quickly turns into an expensive purchase once you factor in buying replacement SSDs and all the time spent reimaging or reinstalling everything, aside from the potential for permanent data loss. Laptops that were delivered with certain "brands" of SSDs were consistently failing within 3 to 6 months. Replacing them with a real brand of SSD (usually Samsung) has in every case resulted in these same PCs running for over 3 years (which is the average replacement interval for laptop computers).

 

Another possibility is poor OS support for SSDs that stresses them unnecessarily by treating them as HDDs. All modern desktop OSes should be free of such problems by now, although on Linux I think you can still run into trouble if you configure your system in certain ways without knowing what you are doing.
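On Linux, one quick sanity check is whether the kernel sees the device as non-rotational and reports discard (TRIM) support. Something along these lines (the device name is an assumption, and this applies to a Linux system, not to the Pharlap target itself):

# Linux-only sanity check: is the device treated as an SSD (non-rotational)
# and does it report discard/TRIM support? Device name is an assumption.
from pathlib import Path

DEVICE = "sda"   # assumed block device name; adjust for your system

def read_queue_attr(name: str) -> str:
    return Path(f"/sys/block/{DEVICE}/queue/{name}").read_text().strip()

if __name__ == "__main__":
    rotational = read_queue_attr("rotational")             # "0" means SSD
    discard_gran = read_queue_attr("discard_granularity")  # "0" means no discard
    print(f"/dev/{DEVICE}: rotational={rotational}, "
          f"discard_granularity={discard_gran} bytes")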

 

Rolf Kalbermatter
My Blog
Message 4 of 4