We received the error code EOD76 on our server and found the orange LED lit up on one drive. We replaced that drive about 3 weeks ago and are now receiving the same error. We are planning to replace the RAID card, as most of the posts I have seen regarding this error suggest. If we replace the card with an exact replacement, will we have to rebuild the arrays, or will we be able to preserve the data on our server?
Before you look at replacing the controller, we should check whether the controller firmware is out of date and causing timeout failures.
What OS is the server running, so we can run a DSET report?
To answer your question, though: you can replace the controller in the server, boot into the controller BIOS (Ctrl+M), and select View/Add array to read the configuration from the drives. The controller should then import the array metadata stored on the disks.
Let me know.
Dell | Social Outreach Services - Enterprise
As Chris said, replacing your controller should not be your first troubleshooting step. Firmware is a big one for the PERC 4, as is HDD firmware. You may also simply have a bad slot - you can try rebuilding the drive in another slot to isolate it. You could also have a corrupt RAID array.
Chris provided the details on importing the RAID config after replacing the RAID card, but FYI - the PERC 4e/Di is NOT a card; it is integrated on the riser in the system. The entire riser would need to be replaced with a compatible riser. You should also make every attempt to ensure that the firmware on the old and new controllers is the same.
We are running Server 2003, and this machine is also our main Exchange server. It is imperative that we retain the data, or at least know we have a full working backup, before we attempt a fix.
Nothing that has been laid out carries additional risk to your data, but you should have a backup before doing anything, regardless of what you do. If you don't already have a backup, that should be your first order of business.
Here is the link to the DSET tool: www.dell.com/.../DriverDetails
Run the report and then respond to the email I am sending you.
I reviewed the DSET report you sent, and it doesn't appear to be an issue with the controller. The controller is up to date and hasn't shown any signs of failure. What I see is that drive 0 fell offline due to interference on the array. Looking at the controller log, I was able to identify that drive 12 is actually causing the issue, with a reported bad block. On SCSI controllers, a failing drive can cause issues across the whole controller due to "interference". What I would suggest is replacing drive 12 first, then, after the rebuild completes, monitoring the two arrays to see whether the failure recurs.
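For anyone following this thread later: the kind of log triage described above (tallying bad-block reports per drive to find the real culprit, rather than trusting which drive went offline) can be sketched in a few lines. The log line format below is made up for illustration; real PERC controller logs look different, but the idea is the same.

```python
import re
from collections import Counter

def find_suspect_drives(log_lines):
    """Count bad-block / medium-error reports per physical drive.

    Assumes a hypothetical log format where drives appear as 'PD <n>';
    adapt the regex to whatever your controller log actually emits.
    """
    pattern = re.compile(r"PD\s*(\d+).*(?:bad block|medium error)", re.IGNORECASE)
    counts = Counter()
    for line in log_lines:
        m = pattern.search(line)
        if m:
            counts[int(m.group(1))] += 1
    # Most frequently erroring drive first
    return counts.most_common()

# Made-up sample log: drive 0 dropped offline, but drive 12 is the
# one actually reporting media errors.
log = [
    "T+0010 PD 12: bad block detected, LBA 0x1A2B3C",
    "T+0042 PD 0: taken offline (timeout)",
    "T+0108 PD 12: medium error, LBA 0x1A2B40",
]
print(find_suspect_drives(log))  # → [(12, 2)]
```

The point of the sketch is the diagnostic habit: the drive that falls offline is often a victim, so group the error events by drive and replace the one that keeps reporting media errors.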