IIS400 Maintenance – Resolved

Posted In: Maintenance — Apr 20th, 2014 at 7:14 pm EST by IX: Omari J.

Incident Description:

Our administrators have identified server degradation that has stopped some web services on this server. The affected services are FTP, the control panel, and File Manager, which will not function until the issue is resolved. All websites will still be accessible. This post will be updated as more information becomes available.

Which Customers are Impacted?

Customers on IIS400 will be affected

How are Customers Impacted?

FTP services, the control panel, and File Manager will not be working

How often will we be updated?

2 hours

Time to Resolution (ETA)

2 hours

Incident Updates

N/A

Resolution Description

Server is back online and operational.

iis517 – Webshell Down – Resolved

Posted In: Outage — Apr 19th, 2014 at 2:54 pm EST by IX: Angela A.

Incident Description:

Our administrators are investigating an internal issue with Webshell on this server. During this time, Webshell will not be accessible but all other services will be up.

Which Customers are Impacted?

All customers on iis517 using Webshell.

How are Customers Impacted?

Webshell will be unavailable.

How often will we be updated?

1 hour

Time to Resolution (ETA)

1 hour

Incident Updates

  • 2014/04/19 2:20PM EST - Administrators are investigating the issue

Resolution Description

PHP4 was fixed on the server and the issue is resolved. Webshell is now available.

IIS1013 Maintenance – Resolved

Posted In: Outage — Apr 13th, 2014 at 6:18 am EST by IX: Kristopher G.

Incident Description:

Our administrators will be performing server maintenance on iis1013. During this time, FTP and HSphere services will be inaccessible; however, sites will remain up and accessible.

Which Customers are Impacted?

All customers on iis1013

How are Customers Impacted?

FTP and HSphere will be unavailable

How often will we be updated?

5 hours

Time to Resolution (ETA)

5 hours

Incident Updates

  • 2014/04/13 06:16 AM EST - Maintenance has started for iis1013.

Resolution Description

The maintenance has finished, and all affected accounts are working normally.

VPS Maintenance – Resolved

Posted In: Maintenance — Apr 13th, 2014 at 2:23 am EST by IX: Kristopher G.

Incident Description:

Our administrators will be performing planned maintenance on all VPS environments to upgrade the firmware on the storage switches. During the maintenance window, all services, including websites, control panels, and email, will be unavailable. Services will resume as soon as the maintenance is complete.

Which Customers are Impacted?

All VPS customers are affected

How are Customers Impacted?

Websites, control panels, and email will be unavailable.

How often will we be updated?

1 hour

Time to Resolution (ETA)

1 - 1.5 hours.

Incident Updates

  • 2014/04/13 2:00 AM EST - Maintenance has begun.
  • 2014/04/13 6:00 AM EST - All Linux nodes/containers are up. All Windows nodes/containers are up; however, WVZ7 is still undergoing maintenance. New ETA: 1 - 1.5 hours.
  • 2014/04/13 6:58 AM EST - Maintenance is now complete. All nodes/containers are now back online.

Resolution Description

Maintenance is now complete. All nodes/containers are now back online.

IIS511 – maintenance – CHKDSK – RESOLVED

Posted In: Maintenance — Apr 08th, 2014 at 2:47 am EST by IX: Kristopher G.

Incident Description:

iis511 has undergone a reboot and entered a check disk (CHKDSK). The server will be unavailable during this process.

Which Customers are Impacted?

All customers on iis511 will be affected

How are Customers Impacted?

Sites may be down/slow

How often will we be updated?

1 hour

Time to Resolution (ETA)

1 hour

Incident Updates

  • 2014/04/08 02:45 AM EST - CHKDSK is on stage 2 of 3.
  • 2014/04/08 03:13 AM EST - Server is now online

Resolution Description

Server is now online

iis507 – Maintenance – Resolved

Posted In: Maintenance — Apr 06th, 2014 at 11:04 pm EST by IX: John B.

Incident Description:

Our system administrators have detected a problem with the server and are currently performing maintenance. During the maintenance, some services will be disabled, but all sites and email will remain online.

Which Customers are Impacted?

All customers on iis507

How are Customers Impacted?

Customers on this server will experience the following issues:
  • Will not be able to enter Webshell.
  • Will not be able to log in via FTP.
  • Will see quota errors in the control panel.
All other services, such as websites and email, will be online during the maintenance.

How often will we be updated?

4 hours

Time to Resolution (ETA)

4 hours

Incident Updates

  • 2014/04/07 12:08 AM EST - Maintenance is almost complete. ETA 2 hours.
  • 2014/04/07 02:32 AM EST - Maintenance is now complete.

Resolution Description

Maintenance is now complete.

LVZ10 – Urgent Maintenance – Resolved

Posted In: Maintenance — Apr 03rd, 2014 at 3:20 am EST by IX: Kristopher G.

Incident Description:

Our VPS Administrators have identified an issue with the hardware node that is affecting all containers on this node and has required a reboot. All containers on this node will be inaccessible during this reboot.

UPDATE:

Our administrators report that 8 containers are still waiting to be restarted.

Which Customers are Impacted?

All customers on LVZ10

How are Customers Impacted?

All containers will be inaccessible during reboot.

How often will we be updated?

1 hour

Time to Resolution (ETA)

1 hour

Incident Updates

  • 2014/04/03 03:10 AM EST - 8 containers are left for reboot. ETA ~10 minutes per container.
  • 2014/04/03 04:04 AM EST - Maintenance is over and all containers are up.

Resolution Description

Maintenance is over and all containers are up.

IIS503 – maintenance – CHKDSK – RESOLVED

Posted In: Maintenance — Apr 01st, 2014 at 9:41 pm EST by IX: Alan

Incident Description:

iis503 has undergone a reboot and entered a check disk (CHKDSK). The server will be unavailable during this process.

Which Customers are Impacted?

All customers on iis503 will be affected

How are Customers Impacted?

Sites may be down/slow

How often will we be updated?

1 hour

Time to Resolution (ETA)

3 hours

Incident Updates

  • 2014/4/1 10:25 PM EST - Checkdisk finished on this server. All services are up.

Resolution Description

The Checkdisk is now complete, and all affected accounts are operating as normal.

IIS511 – maintenance – CHKDSK – RESOLVED

Posted In: Maintenance — Apr 01st, 2014 at 9:38 pm EST by IX: Alan

Incident Description:

iis511 has undergone a reboot and entered a check disk (CHKDSK). The server will be unavailable during this process.

Which Customers are Impacted?

All customers on iis511 will be affected

How are Customers Impacted?

Sites may be down/slow

How often will we be updated?

1 hour

Time to Resolution (ETA)

3 hours

Incident Updates

  • 2014/4/1 9:50 PM EST - Checkdisk is verifying security descriptors. Stage 3 of 3. Approximate ETA 2 hours.
  • 2014/4/1 11:15 PM EST - Disk check has finished. All services are back up and running.

Resolution Description

Checkdisk has finished, and all affected accounts should be back up and running as normal.

DDoS – Cloud network – Resolved

Posted In: Outage — Mar 27th, 2014 at 11:54 am EST by IX: Jared E.

Incident Description:

Some of our Cloud customers were experiencing slowness on our cloud network due to an incoming DDoS. The DDoS has been mitigated, but 15 accounts needed to be rebooted afterwards.

 

The following servers are back online.

cloud.datavizion.biz
cloud.excellentdns.com-11.5+CF_TEST
cloud.iContinuum.ca
cloud.interaction-science.com
cloud.babynameguide.com
cloud.educationaloutfitters.com
act.lhai.com
cloud.bmg.com
cloud.afrenterprises.com2
cloud.hoffmanamps.com
cloud.hotelredcanal.com
cloud.lesotho.com
cloud.websuccess.com
cloud.wudtools.com
cloud1.northcoastcapitalcorporation.com
election1.solutionsbyweb.com
proba.sourceverse.com
test.beckshoes

 

 

Which Customers are Impacted?

Customers on our Cloud platform

How are Customers Impacted?

Customers may experience slowness in email, database, website, or applications on their cloud servers

How often will we be updated?

As more information becomes available

Time to Resolution (ETA)

N/A

Incident Updates

  • 2014/03/27 11:55 AM EST - Our system administrators are investigating the issue; conditions appear to be improving at the moment.
  • 2014/03/27 12:19 PM EST - Our system administrators have isolated and resolved the DDoS. Because of the DDoS, 15 accounts need to be rebooted. A list of the servers that are back online appears at the top of this post.
  • 2014/03/27 12:56 PM EST - Everything has been rebooted and should be back online at this time. We will continue to monitor the situation to ensure everything is stable.

Resolution Description

The DDoS has been mitigated, and all affected accounts remain stable after the reboot. If you are still experiencing issues, please don't hesitate to contact us.

Mail61 – Maintenance – Resolved

Posted In: Outage — Mar 23rd, 2014 at 5:49 pm EST by IX: John B.

Incident Description:

Mail61 is experiencing intermittent downtime.  Our system administrators are currently investigating.

Which Customers are Impacted?

All customers using email on Mail61

How are Customers Impacted?

Customers on Mail61 will be unable to send or receive email for the duration.

How often will we be updated?

1 hour

Time to Resolution (ETA)

N/A

Incident Updates

N/A

Resolution Description

The server is back online, and all affected mail services are working as normal.

IIS511 – maintenance – CHKDSK – Resolved

Posted In: Outage — Mar 20th, 2014 at 12:56 pm EST by IX: Ryan D.

Incident Description:

iis511 has undergone a reboot and entered a check disk (CHKDSK). The server will be unavailable during this process.

Which Customers are Impacted?

All customers on iis511 will be affected

How are Customers Impacted?

Sites may be down/slow

How often will we be updated?

3 hours

Time to Resolution (ETA)

20 hours

Incident Updates

  • 2014/3/20 3:58 PM EST - The CHKDSK has completed and the server is now under RAID verification. Current status is 4%.
  • 2014/3/20 6:24 PM EST - RAID verification is now at 5%
  • 2014/3/20 7:54 PM EST - RAID verification is now at 17%
  • 2014/3/20 10:04 PM EST - RAID verification is now at 44%
  • 2014/3/21 2:22 AM EST - RAID verification is now at 76%
  • 2014/3/21 7:20 AM EST - RAID verification is now at 80%

Resolution Description

The RAID verification is now complete, and all affected accounts are operating as normal.

Cloud Service Slowness – Resolved

Posted In: Outage — Mar 18th, 2014 at 12:38 pm EST by IX: Omari J.

Incident Description:

Our system administrators detected an issue with Cloud servers this morning that was causing them to be inaccessible or very slow for customers. We are currently investigating in order to determine the root cause of this issue. Once we have more details the blog will be updated.

Update:  The symptoms appear to have been caused by a massive spike in writes to the storage used by the affected Cloud servers.  While that initial spike has abated, some writes are still queued by the storage management system.  Services will continue to improve as the queues are processed.  Our engineers are still investigating the root cause of the spike.

Update 2: 3/19/14 5:30AM: The environment continues to rebalance load onto the new SAS storage array, and System Administrators have been able to further streamline this process to get all servers operating normally as quickly as possible.  The majority of affected accounts should have sites that are up and working, though some customers will still experience slowness in their WHM management console until the full rebalance is completed.

Update 3: 3/19/14 5:25PM: Systems Administrators have confirmed that services have stably returned to normal.  If you are still experiencing server slowness, please contact our support team at 1(877)776-9642 or internationally at 1(614)534-1973.

Which Customers are Impacted?

Some Cloud customers will be affected.

How are Customers Impacted?

Websites, email, databases and other services may be slow to respond.

How often will we be updated?

As new information is available.

Time to Resolution (ETA)

Pending investigation.

Incident Updates

  • 12:57 PM EDT - Most customer servers have returned to normal operation. We are still investigating the cause.
  • 1:06 PM EDT - Incident Description has been expanded above.
  • 1:13 PM EDT - Our engineers are adding an additional SAS storage group to the SAN to mitigate impact of future I/O spikes.  The additional SAS array should be available to add to the SAN in the next couple hours.
  • 2:13PM EDT - Our engineers have discovered several Virtual Machines (VMs) that had become 'frozen' or 'stuck' due to the issue and needed to be rebooted.  In some of these cases the server may have forced a File System Check, which has resulted in some downtime for a select few VMs.
  • 2:45PM EDT - We have temporarily disabled the functionality to restart VMs from within the my.ixwebhosting.com control panel as any VMs that are rebooted at this time will likely force a File System Check resulting in more downtime for that VM.
  • 4:00PM EDT - We have grouped another high-speed SAS Storage Array to the Storage Area Network (SAN).  This will provide an additional environment to move frequently accessed data to and give us time to investigate the cause and implement additional fixes.  All in all, this adds over 5TB of high-speed storage to the SAN.  As we are able to add data to this new storage array, we will continue to see performance improvements across the entire SAN.
  • 4:40PM EDT - We are still monitoring performance on the SAN.  We expect to see improvements as more and more data is moved to the faster storage.
  • 7:03PM EDT -  Performance is improving currently for the servers affected. We will continue to update as we receive more information.
  • 9:52PM EDT - Our Head of Infrastructure reports that 35 of the 40 servers that needed to be rebooted for this performance issue are now online.  The remaining servers are in the process of file system checks which are taking longer than normal due to the increased load on the system.  The servers that are up may still experience slower than normal response times for the next few hours while the rebalance completes.
  • 11:15PM EDT - Our Head of Infrastructure reports that all 40 servers that needed to be rebooted for this performance issue are now online.  Some servers that are up may still experience slower than normal response times, and may even appear down, for the next few hours while the rebalance completes.  The high-speed SAS Storage Array has also completed rebuild and verification of the RAID and performance should begin to improve at a faster rate.
  • 03/19/14 5:30AM EDT - Incident Description has been updated above.
  • 03/19/14 11:15AM EDT - Our engineers are continuing to monitor performance in the environment and it has been steadily improving over time.
  • 3/19/14 5:25PM - Systems Administrators have confirmed that services have stably returned to normal.  If you are still experiencing server slowness, please contact our support team at 1(877)776-9642 or internationally at 1(614)534-1973.

Resolution Description

Systems Administrators have confirmed that services have stably returned to normal.  If you are still experiencing server slowness, please contact our support team at 1(877)776-9642 or internationally at 1(614)534-1973.

iis1010 – RAID rebuild – Resolved

Posted In: Maintenance — Mar 15th, 2014 at 6:51 am EST by IX: Antonio S.

Incident Description:

Our system administrators have detected an issue with this server. The server’s RAID is being rebuilt. During this time the web server may experience slowness.

Which Customers are Impacted?

All clients using this web server.

How are Customers Impacted?

All server functions may experience slowness.

How often will we be updated?

N/A

Time to Resolution (ETA)

N/A

Incident Updates

  • 2014/03/15 6:45 AM EST -  Current status RAID rebuild 65%.
  • 2014/03/15 3:15 PM EST -  The RAID rebuild is currently at  68%.
  • 2014/03/15 6:30 PM EST -  The RAID rebuild is currently at  70%.
  • 2014/03/15 9:15 PM EST -  The RAID rebuild is currently at  71%.
  • 2014/03/16 3:39 AM EST -  The RAID rebuild is currently at  73%.
  • 2014/03/16 6:17 AM EST -  The RAID rebuild is currently at  74%.
  • 2014/03/17 3:09 PM EST - The RAID rebuild is currently at 78%. Another update will be posted when more information becomes available.
  • 2014/03/17 1:39 AM EST -  The RAID rebuild is currently at  97%.
  • 2014/03/18 2:08 AM EST - The RAID rebuild is complete and server is online.

Resolution Description

The server is back online and all server functions have returned to normal.

Server Connectivity Issues – Resolved

Posted In: Outage — Mar 02nd, 2014 at 4:27 am EST by IX: Antonio S.

Incident Description:

100% of the affected email and database servers are now back online.  If you are still experiencing any problems with your email or databases, please contact our support team.  We’ve completed our formal investigation into the cause of the problem, and have published the post-issue report which can be found here: http://www.ixwebhosting.com/blog/2014/03/ix-outage-post-issue-technical-report/

Sunday morning at 3:49am EST, we discovered a problem with the RAID system on one of our storage arrays. We determined that migrating the data from the failed array to new locations is the best way we can get everything back up and running as quickly as possible.

Simplified description of the issue:

  • RAID is a type of data storage system that's designed to handle drive failures by automatically re-creating the failed drive; but when multiple RAID drives fail at once, or in too close succession, servers go down.  This extremely unlikely event is what occurred on Sunday night, and we are still investigating its root cause.
  • The recreation takes a long time for each server because of the delicate state the RAID array is in.  Our engineers are painstakingly going through all data, ensuring that it is in good condition, fixing any issues, and then migrating it to tested drives.  Extreme caution must be taken to avoid heavy loads, which may cause the array to have additional failures.
  • It’s too risky to attempt a repair of the array in this delicate state until after the data has been migrated, as another failure could cause data loss.

Technical description of the issue (updated with further clarification):

  • The affected cluster of servers uses a SAN, which is made up of 5 storage arrays in a tiered configuration. Each storage array consists of 14 enterprise-grade 15,000 RPM SAS iSCSI-connected hard drives with 2 hot spares. This is a ‘14+2 RAID 50’ storage array.
  • During regular integrity scans, the RAID controller for one of the 5 storage arrays recognized degraded service on Drive 6. So, as designed, it automatically activated a hot spare, Drive 10, and began rebuilding the RAID array. Several hours after this rebuild began, Drive 0 failed, which prevented the rebuild on Drive 10 from completing due to insufficient parity data in the parity group. Two unusable drives in the same parity group exceeded the fault tolerance of this storage array (a simplified sketch of this parity behavior follows this list). This is the point where the outage began – until the source drive (Drive 0) failed, all servers in this cluster were online and functional.
  • Working with our hardware vendor's tier 4 engineers, we were able to coax Drive 0 back into active status while Drive 6 remained degraded. This satisfied the fault tolerance threshold of one degraded drive per parity group and allowed the storage array to be activated. This got the SAN back online, and we were able to start transferring your data from the affected volumes to stable ones.
  • Due to the very fragile nature of the array and its dependency on Drive 0 remaining active (which could fail at any time, causing a loss of all data on the array), our hardware vendor's senior tier 4 engineer stated that the evacuation process must be handled one volume (one server) at a time, in sequence. At this point, additional engineers or hardware cannot influence the speed of the recovery process. Evacuating multiple volumes at once would make the process go faster, but could result in another fault and possibly data loss.
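
For readers curious why one failed drive per parity group is survivable but two are not, here is a minimal Python sketch of the general RAID 5-style XOR parity idea used within each parity group. It is a simplified illustration with made-up block values and hypothetical helper names, not our storage vendor's actual controller logic.

    # Minimal illustration of single-parity (RAID 5-style) fault tolerance,
    # as used within each parity group of a RAID 50 array. This is a toy
    # model for explanation only, not a real controller implementation.
    def xor_blocks(blocks):
        """XOR a list of equal-length byte blocks together."""
        result = bytearray(len(blocks[0]))
        for block in blocks:
            for i, b in enumerate(block):
                result[i] ^= b
        return bytes(result)

    # One stripe: three data blocks (one per data drive) plus one parity block.
    data_blocks = [b"AAAA", b"BBBB", b"CCCC"]
    parity = xor_blocks(data_blocks)  # written to the parity drive

    # A single drive failure (say the second data drive) can be repaired:
    # XOR the surviving data blocks with the parity block to rebuild the
    # missing block. This is what a hot-spare rebuild does.
    rebuilt = xor_blocks([data_blocks[0], data_blocks[2], parity])
    assert rebuilt == data_blocks[1]

    # If a second drive in the same parity group fails before that rebuild
    # completes, two blocks in the stripe are unknown. XOR parity cannot
    # recover two unknowns, so the group exceeds its fault tolerance and
    # the array goes offline, which is the situation described above.

In short, the array could keep running while only Drive 6 was degraded, but once Drive 0 also became unusable in the same parity group there was no longer enough parity information to reconstruct the missing data on the fly.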

At this time, we believe that we will be able to restore most of the servers and their data – either your live data, or from a backup.  It is possible that some data was lost, but so far we have been able to recover 100%.

We’re still working around the clock to get the remaining servers back online. Once we’re 100% up and running, we will be conducting an in-depth investigation and publishing a post-issue report on our main blog. This report will include the root cause of these issues, as well as documentation outlining the steps we’ll be taking to prevent any future incidents of this nature.

When the outage began, email sent to affected servers started bouncing back to the sender with a notification that it was not delivered.  Systems administrators blocked port 25 on those mail servers very early Monday morning, so any email received since that time is now queued.  We expect that all queued mail will be delivered once the mail server returns to service, and it could take 24 hours or more to complete delivery.  One exception is mail907: due to an error, this server was missed during the port 25 block.  All mail already on that server is intact, and all mail sent to this server was bounced back to the sender with a notification that it was not delivered.  Admins have checked all remaining mail servers and have confirmed that mail907 was the only mail server on which this error occurred.

There is a way to determine your email server through the control panel, but it's easier to just go to mxtoolbox.com, enter your domain name, and click the MX Lookup button. (A small script that performs the same MX lookup is sketched below.)
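
If you would rather check from a script than the web tool, the short Python sketch below performs the same MX lookup. It assumes the third-party dnspython package is installed, and the domain name shown is a placeholder.

    # Minimal sketch: list the MX (mail exchanger) records for a domain.
    # Requires the third-party dnspython package (pip install dnspython).
    import dns.resolver

    domain = "example.com"  # placeholder, replace with your own domain name

    # dnspython 2.x uses resolve(); older releases use dns.resolver.query().
    answers = dns.resolver.resolve(domain, "MX")

    # Each record has a preference (lower is tried first) and a hostname.
    for record in sorted(answers, key=lambda r: r.preference):
        print(record.preference, record.exchange.to_text())

The hostname with the lowest preference value is the server that receives mail for the domain.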

To find your database server:

1. Log in to your HE Web Hosting account (https://my.hostexcellence.com/)
2. Go to ‘My Products,’ then click ‘Manage’ under your hosting product
3. Click the MySQL or PgSQL Server Icon under “Databases”
4. The name will be listed next to the ‘Host Name’ at the top

All servers are now ONLINE:

  • All Control Panels (CP9, 10, 11)
  • pgsql1101
  • mysql901-mysql902
  • mysql905-mysql906
  • mysql912-mysql914
  • mysql1002-mysql1003
  • mysql1103
  • mysql1006-mysql1007
  • mysql1101
  • mail901-mail920
  • mail1001-mail1006
  • mail1008-mail1016
  • mail1018
  • mail1101-mail1110
  • mail1112-mail1114
  • mail1116
  • mail1121-mail1124
  • mail1126
  • mysql1126-mysql1128
  • mysql919
  • mysql908
  • mysql917

We are going to continue to monitor this situation and complete an overview of all servers, to make sure everything is functioning properly. We greatly appreciate the patience and understanding with this issue, and will provide any other details we have as they become available. Thank you.

Which Customers are Impacted?

Some customers on CP9, 10, and 11.  To determine if you are impacted, please refer to the list of servers provided above.

How are Customers Impacted?

Customers will not be able to connect to affected mail and/or database servers.

How often will we be updated?

As often as possible.

Time to Resolution (ETA)

ETAs are supplied per server in the list above.  Times are approximate because they are based on averages.  ETA will be updated frequently as new servers come online.

Incident Updates

  • 2014/03/02 05:20 AM EST - We have determined that one of the storage environments went offline. We are still looking into the reason for this. At this time all web servers are up but the issue is still affecting email and database servers.
  • 2014/03/02 07:10 AM EST - We are still investigating and are contacting our hardware provider to help troubleshoot the issue.
  • 2014/03/02 08:25 AM EST - We are continuing to work with our hardware providers and their senior engineers.  We have no new information at this time, but will be sure to update you as soon as we do.
  • 2014/03/02 9:38 AM EST - We are still in contact with our hardware service provider and are collaborating with their Senior Engineer to attempt to get the storage network back online.
  • 2014/03/02 10:45 AM EST - The issue was determined to be a multiple drive failure with more drives failing than the RAID was designed to handle.  Working with the hardware service provider, we were able to get one of the SANs operating again but in a diminished state.  We are now determining the best methods to move data from the restored drive to stable non-failed members so that we can restore the RAID and bring servers back on line.
  • 2014/03/02 12:49 PM EST - Our administrators are directly working with our hardware provider to resolve this issue. We still have no ETA, but we will update as more information becomes available.
  • 2014/03/02 2:43 PM EST - We are still working with our provider. They are running tests to move some storage to non-failed drives. Once we confirm that this process works, we can begin to bring the servers online. This process will be slow, and updates will be sparse until the servers start coming online.
  • 2014/03/02 5:56 PM EST - Our admins have the storage online, but in a fragile state. We are copying the data, and will begin bringing servers up shortly. Please stay tuned to the Status Blogs for specific server information.
  • 2014/03/02 5:56 PM EST - pgsql1101 is back online! Other servers will be coming on shortly. We will keep you updated.
  • 2014/03/02 7:44 PM EST - The migration of mail1001 is currently at 55%
  • 2014/03/02 8:54 PM EST - Mail1001 is back online
  • 2014/03/02 9:06 PM EST - Mysql1002 is back online
  • 2014/03/02 9:08 PM EST - Mail1010 is back online
  • 2014/03/02 9:14 PM EST - Mail1002 is back online
  • 2014/03/02 11:46 PM EST - Mail1003 is back online
  • 2014/03/03 2:29 AM EST - mail1004 is back online
  • 2014/03/03 4:57 AM EST - mysql901 is back online
  • 2014/03/03 5:42 AM EST - mysql902 is back online
  • 2014/03/03 6:43 AM EST - mysql905 is back online
  • 2014/03/03 8:15 AM EST - mysql906 and mysql912 are back online. Our administrators are now putting a priority on email servers.
  • 2014/03/03 10:52 AM EST - Mail901 is back online.
  • 2014/03/03 2:18 PM EST - Mail902 is back online.
  • 2014/03/03 2:56 PM EST - All Control Panels are back up and confirmed working. This means that all customers can access their hosting CP and manage hosting. Many Mail and MySQL servers are still down.
  • 2014/03/03 4:14 PM EST - Mail912 is back online.
  • 2014/03/03 4:14 PM EST - MySQL1103 is back online.
  • 2014/03/03 5:16 PM EST - Mail1102 is back online.
  • 2014/03/03 6:15 PM EST - Mail1101 is back online.
  • 2014/03/03 7:14 PM EST - Mail1103 is back online.
  • 2014/03/03 8:15 PM EST - Mail903 is back online.
  • 2014/03/03 9:30 PM EST - Mail1104 is back online.
  • 2014/03/03 10:40 PM EST - Mail904 is back online.
  • 2014/03/04 12:08 AM EST - mail1105 is back online.
  • 2014/03/04 03:08 AM EST - mail905 is back online.
  • 2014/03/04 05:40 AM EST - mail906 is back online.
  • 2014/03/04 08:15 AM EST - mail909 is back online.
  • 2014/03/04 11:08 AM EST - mail911, mysql913, and mail1005 are back up.
  • 2014/03/04 12:21 PM EST - mail910 is back online.
  • 2014/03/04 12:48 PM EST - mail1106 is back online.
  • 2014/03/04 2:35 PM EST - mail1006 is back online.
  • 2014/03/04 2:50 PM EST - mail1108 is back online.
  • 2014/03/04 3:42 PM EST - mail1109 is back online.
  • 2014/03/04 5:49 PM EST - mail1007 is back online.
  • 2014/03/04 7:38 PM EST - mail1008 is back online.
  • 2014/03/04 8:01 PM EST - mail1110 is back online.
  • 2014/03/04 9:30 PM EST - mail1009 is back online.
  • 2014/03/04 11:16 PM EST - mail1112 is back online.
  • 2014/03/04 11:46 PM EST - mail907 is back online.
  • 2014/03/05 02:00 AM EST - Mail servers mail1011 and mail1113 are online.
  • 2014/03/05 03:40 AM EST - mail908 is online.
  • 2014/03/05 06:00 AM EST - mail1012 is online.
  • 2014/03/05 07:30 AM EST - mail1114 is online.
  • 2014/03/05 09:37 AM EST - mysql1128 is online.  mail913 is next
  • 2014/03/05 10:19 AM EST - mail913 and mysql1127 are online.
  • 2014/03/05 10:19 AM EST - mysql919 is back online.
  • 2014/03/05 1:40 PM EST - mail1013 and mail1116 are back online. mail914 is next
  • 2014/03/05 3:51 PM EST - mysql1126, mail914, and mail1121 are back online.
  • 2014/03/05 8:42 PM EST - mail1014, mail915, mail1015, mail1124, and mail1018 are back online.  mail1122 and mail916 are currently in progress.
  • 2014/03/05 9:53 PM EST - Mail1122 is back online.
  • 2014/03/05 10:45 PM EST - mail916 is online.  mail1016 is next.
  • 2014/03/06 12:12 AM EST - mail1016 is online.  mail1123 is next.
  • 2014/03/06 12:37 AM EST - mail1123 is online.  mail917 is next.
  • 2014/03/06 01:48 AM EST - mail917 is online.  mail1017 is next.
  • 2014/03/06 02:56 AM EST - Servers mail1017 and mail1124 are online.  mail918 is next.
  • 2014/03/06 03:50 AM EST - mail918 is online.  mail1125 is next.
  • 2014/03/06 04:50 AM EST - mail1125 is online.  mail919 is next.
  • 2014/03/06 07:45 AM EST - mail919 is online.  mail920 is next.
  • 2014/03/06 08:50 AM EST - mail920 is still being moved. We have found that mysql1003 and mysql1101 have had a small amount of data that was missed in the first migration. They have been taken back offline to finish the migrations. They will be processed after mail920 is finished.
  • 2014/03/06 09:50 AM EST - mail920 is now online. All mail servers are finished at this time. mysql1003 is next.
  • 2014/03/06 10:51 AM EST - mysql1003 and mysql1101 are now online. mysql1006 is in progress.
  • 2014/03/06 01:52 PM EST - mysql1006, mysql1106, mysql1007, mysql1107, mysql1010, and mysql1108 are now online.  Mysql1011 is next.
  • 2014/03/06 03:04 PM EST - mysql1011, mysql1110, mysql908, and mysql914 are now online.  Mysql1014 is next.
  • 2014/03/06 05:04 PM EST - mysql1014, mysql1015, mysql1113, and mysql915 are back online.
  • 2014/03/06 05:50 PM EST - mysql1016 and mysql1116 are back online.  Mysql917 is the last server, and will be brought up soon.
  • 2014/03/06 06:40 PM EST - Mysql917 is back online, all servers have been completed.

Resolution Description

N/A

 