Scheduled server reboot (Resolved)
  • Priority - Medium
  • Affecting Server - LDeX1-Plesk6
  • We will be performing scheduled maintenance including essential software updates on the LDeX1-Plesk6 server between 21:00 and 23:59 on 15/01/2024. This will require us to reboot the server so there will be a loss of service whilst this is done. The maintenance work itself should take less than 30 minutes to complete and we expect downtime to be much less than this.

    This is essential maintenance to ensure the ongoing stability and security of this server. Please accept our apologies for any inconvenience that this will cause and feel free to contact our support staff via the usual means if you have any questions or concerns.

    Update (22:13): The LDeX1-Plesk6 server has been rebooted and all services are functioning normally again. Please get in touch with our helpdesk in the usual manner if you are still experiencing any problems.

  • Date - 15/01/2024 21:00 - 15/01/2024 23:59
  • Last Updated - 15/01/2024 22:16
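
    As a practical aside, the quickest way to confirm for yourself that "all services are functioning normally" after a reboot like the one above is to probe each service port. Below is a minimal sketch in Python; the hostname and port list are illustrative assumptions rather than details taken from this notice.

      # Minimal post-reboot service check: try a TCP connection to each
      # service port and report whether it accepts connections.
      # Hostname and ports below are illustrative assumptions.
      import socket

      HOST = "ldex1-plesk6.example.net"  # hypothetical hostname
      PORTS = {25: "SMTP", 80: "HTTP", 143: "IMAP", 443: "HTTPS", 3306: "MySQL"}

      def is_open(host: str, port: int, timeout: float = 5.0) -> bool:
          """Return True if a TCP connection to host:port succeeds within the timeout."""
          try:
              with socket.create_connection((host, port), timeout=timeout):
                  return True
          except OSError:
              return False

      for port, name in PORTS.items():
          print(f"{name:5} ({port}): {'up' if is_open(HOST, port) else 'DOWN'}")
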
Reboot to resolve backup issue (Resolved)
  • Priority - Medium
  • Affecting Server - LDeX1-Plesk8
  • We will be performing a reboot of the LDeX1-Plesk8 server between 21:00 and 23:59 on 15/01/2024 in order to resolve an issue which is preventing us from taking backups.

    Update (22:01): The LDeX1-Plesk8 server has been rebooted and all services are functioning normally again. Please get in touch with our helpdesk in the usual manner if you are still experiencing any problems.

  • Date - 15/01/2024 21:00 - 15/01/2024 23:59
  • Last Updated - 15/01/2024 22:14
Reboot to resolve backup issue (Resolved)
  • Priority - Medium
  • Affecting Server - LDeX1-cPanel6
  • We will be performing a reboot of the LDeX1-cPanel6 server between 21:00 and 23:59 on 04/12/2023 in order to resolve an issue which is preventing us from taking backups.

  • Date - 04/12/2023 21:00 - 04/12/2023 23:59
  • Last Updated - 15/01/2024 09:58
Reboot to resolve backup issue (Resolved)
  • Priority - Medium
  • Affecting Server - LDeX1-Plesk8
  • We will be performing a reboot of the LDeX1-Plesk8 server between 21:00 and 23:59 on 04/12/2023 in order to resolve an issue which is preventing us from taking backups.

  • Date - 04/12/2023 21:00 - 04/12/2023 23:59
  • Last Updated - 15/01/2024 09:58
Network maintenance (Resolved)
  • Priority - Medium
  • Affecting System - AS41000 network
  • We will be taking one of the inter-datacentre backhaul circuits on our core network between Iomart DC12 (formerly LDeX1) in London and Iomart DC13 (formerly LDeX2) in Manchester out of service at 21:00 on 16/11/2023 in order to clean the fibres.

    Our network is designed to withstand the loss of any one circuit and continue to function as normal, so this maintenance work should not be service affecting.

    Whilst we do not anticipate a complete loss of connectivity to any part of our network during this maintenance window, as with all network maintenance of this nature there is the risk of unexpected issues, so the network should be considered at risk of disruption for the duration of the maintenance window.

    If you have any questions or concerns about this maintenance work, please don't hesitate to get in touch with our helpdesk in the usual manner.

  • Date - 16/11/2023 21:00 - 16/11/2023 23:59
  • Last Updated - 04/12/2023 12:11
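
    For anyone wanting to watch their own connectivity during an at-risk window like the one above, a rough packet-loss probe is sketched below. The target address and probe sizes are illustrative assumptions; `ping` here is the standard Linux utility.

      # Rough packet-loss probe for an "at-risk" maintenance window: ping a
      # target in batches and report the loss percentage per batch.
      # Target address and batch sizes are illustrative assumptions.
      import subprocess
      import time

      TARGET = "192.0.2.1"  # hypothetical target; substitute a host on the affected network

      def loss_percent(target: str, count: int = 20) -> float:
          """Send `count` ICMP echoes and return the percentage that got no reply."""
          result = subprocess.run(
              ["ping", "-c", str(count), "-i", "0.2", target],
              capture_output=True, text=True,
          )
          received = result.stdout.count("bytes from")
          return 100.0 * (count - received) / count

      while True:  # Ctrl+C to stop
          print(f"packet loss: {loss_percent(TARGET):.0f}%")
          time.sleep(30)
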
Unexpected reboot of VPS node (Resolved)
  • Priority - High
  • Affecting System - LDeX1-VPS6
  • The LDeX1-VPS6 node has unexpectedly rebooted. All VPS on this node are currently unavailable.

    Update (14:45): The node is back online and VPS are booting.

    Update (15:07): Some VPS haven't come back online. We are investigating why.

    Update (15:32): We believe that there is a compatibility issue with some of the virtualisation software which is causing issues with VPS that have more than 4GB of RAM. We are downgrading these packages to an older version that we know works. This requires us to reboot the node and so all VPS will be offline.

    Update (15:40): The node is back online again and VPS are booting.

    Update (15:45): VPS appear to be running stably so far. We are monitoring closely.

    Update (15:50): All VPS are running and everything has remained stable so far.

    Update (15:55): Everything has remained stable so we think this is now resolved. We will continue to keep a close eye on the node in case of any further issues however. If you are still seeing any problems with your VPS, then please get in touch with us via support@freethought.uk or raise a ticket in the customer portal.

  • Date - 15/10/2023 14:35 - 15/10/2023 15:45
  • Last Updated - 15/10/2023 15:56
Reboot to resolve backup issue (Resolved)
  • Priority - Medium
  • Affecting Server - LDeX1-Plesk5
  • We will be performing a reboot of the LDeX1-Plesk5 server between 21:00 and 23:59 on 12/07/2023 in order to resolve an issue which is preventing us from taking backups.

    Update (22:37): We are rebooting the LDeX1-Plesk5 server.

    Update (22:44): The LDeX1-Plesk5 server is back online and all services are functioning normally again. Please get in touch with our helpdesk in the usual manner if you are still experiencing any problems.

  • Date - 12/07/2023 21:00 - 12/07/2023 23:59
  • Last Updated - 13/07/2023 09:12
Reboot to resolve backup issue (Resolved)
  • Priority - Medium
  • Affecting Server - LDeX1-Plesk5
  • We will be performing a reboot of the LDeX1-Plesk5 server between 21:00 and 23:59 on 28/06/2023 in order to resolve an issue which is preventing us from taking backups.

    Update (22:54): We are rebooting the LDeX1-Plesk5 server.

    Update (22:59): The LDeX1-Plesk5 server is back online and all services are functioning normally again. Please get in touch with our helpdesk in the usual manner if you are still experiencing any problems.

  • Date - 28/06/2023 21:00 - 28/06/2023 23:59
  • Last Updated - 28/06/2023 23:07
Reboot to resolve backup issue (Resolved)
  • Priority - Medium
  • Affecting Server - LDeX1-cPanel6
  • We will be performing a reboot of the LDeX1-cPanel6 server between 21:00 and 23:59 on 28/06/2023 in order to resolve an issue which is preventing us from taking backups.

    Update (22:54): We are rebooting the LDeX1-cPanel6 server.

    Update (22:59): The LDeX1-cPanel6 server is back online and all services are functioning normally again. Please get in touch with our helpdesk in the usual manner if you are still experiencing any problems.

  • Date - 28/06/2023 21:00 - 28/06/2023 23:59
  • Last Updated - 28/06/2023 23:06
Reboot to resolve backup issue (Resolved)
  • Priority - Medium
  • Affecting Server - LDeX1-Plesk7
  • We will be performing a reboot of the LDeX1-Plesk7 server between 21:00 and 23:59 on 10/05/2023 in order to resolve an issue which is preventing us from taking backups.

    Update (10:31 10/05/2023): We are rebooting the LDeX1-Plesk7 server.

    Update (10:33 10/05/2023): The LDeX1-Plesk7 server is back online.

    Update (09:03 11/05/2023): We have confirmed that the backups for the LDeX1-Plesk7 server are working normally again.

  • Date - 10/05/2023 21:00 - 10/05/2023 23:59
  • Last Updated - 11/05/2023 09:04
Reboot to resolve backup issue (Resolved)
  • Priority - Medium
  • Affecting Server - LDeX1-Plesk4
  • We will be performing a reboot of the LDeX1-Plesk4 server between 21:00 and 23:59 on 10/05/2023 in order to resolve an issue which is preventing us from taking backups.

    Update (10:31 10/05/2023): We are rebooting the LDeX1-Plesk4 server.

    Update (10:33 10/05/2023): The LDeX1-Plesk4 server is back online.

    Update (09:03 11/05/2023): We have confirmed that the backups for the LDeX1-Plesk4 server are working normally again.

  • Date - 10/05/2023 21:00 - 10/05/2023 23:59
  • Last Updated - 11/05/2023 09:04
Customer Portal Upgrades (Resolved)
  • Priority - Low
  • Affecting System - Customer Portal
  • We will be performing routine maintenance and software upgrades to the Freethought customer portal starting at 20:00 on Wednesday 26th April. We expect the actual impact to be minimal and to last only a very short while, although we have allowed two hours to complete the work.

  • Date - 26/04/2023 20:00 - 26/04/2023 22:00
  • Last Updated - 10/05/2023 19:35
DDoS attack (Resolved)
  • Priority - High
  • Affecting System - LON2-FWCL1
  • We are investigating a DDoS attack which is affecting services behind the LON2-FWCL1 firewall cluster.

    Update (18:45 28/01/2023): We have mitigated the attack and normal service has resumed. We are continuing to monitor the network in case of further issues. Please accept our apologies for the inconvenience.

    Update (19:15 28/01/2023): We have identified and resolved a side effect of our mitigation of the DDoS attack which was impacting DNS traffic.

    Update (20:30 28/01/2023): We have made further changes to how we are blocking the DDoS traffic in order to resolve issues seen by some customers. As a result of this, all traffic to the shared IP address on the LDeX1-Plesk8 server is currently being blocked.

    Update (21:45 28/01/2023): We have made some changes and the LDeX1-Plesk8 server is available again, albeit with degraded performance due to the volume of traffic.

    Update (22:15 28/01/2023): We think we have blocked most of the malicious traffic and so the LDeX1-Plesk8 server is working again now.

    Update (13:31 30/01/2023): The attack subsided in the early hours of Sunday the 29th and we haven't seen any further signs of malicious traffic since then, so we are marking this as resolved.

  • Date - 28/01/2023 18:07
  • Last Updated - 30/01/2023 13:32
Emergency server reboot (Resolved)
  • Priority - High
  • Affecting Server - LDeX1-Plesk6
  • We are performing an emergency reboot of the LDeX1-Plesk6 server in order to apply Microsoft Windows updates to patch a critical security issue announced today.

    Update (10:37): The server has been rebooted successfully and all services are functioning normally again. Please get in touch with our helpdesk in the usual manner if you are still experiencing any problems.

  • Date - 14/12/2022 10:34 - 14/12/2022 10:37
  • Last Updated - 14/12/2022 10:41
DC12 network issue (Resolved)
  • Priority - Critical
  • Affecting System - DC12 (LDeX1) services
  • We are investigating a network issue affecting services in DC12 (formerly LDeX1).

    Update: There is an issue with LON2-ESW1 affecting IPv4 traffic. We are currently rebooting the Virtual Chassis to try and restore service.

    Update: The Virtual Chassis is back online and all alerts have cleared.

    Update: BGP is still causing network connectivity issues and we are still investigating.

    Update: We have identified the root cause of this and taken action to mitigate it. The network is now stable and we are continuing to investigate exactly what happened.

    Update: The network has remained stable since mitigations were put in place at 10:48.

  • Date - 24/11/2022 10:05 - 24/11/2022 10:48
  • Last Updated - 24/11/2022 11:09
DC12 migration (Resolved)
  • Priority - Medium
  • Affecting System - DC12 web hosting, ULTRA hosting, ULTRA Reseller and virtual server services
  • Over the past 12 months, we have been hard at work behind the scenes to bring together the largest programme of core infrastructure upgrades that we've ever undertaken here at Freethought in our nearly 20 years of business.
     
    As part of this work we shall be moving into our very own private suite in the North London data centre that has been home to lots of our servers for the past 10 years.
     
    The first phase of this migration is to move all of our web hosting, ULTRA hosting, ULTRA Reseller and virtual server services, which we will be doing on 12/11/2022.
     
    We will start moving web hosting, ULTRA hosting and ULTRA Reseller servers at 20:00. We estimate that it will take approximately 15 minutes per server.
     
    We will start moving the virtual server nodes at 21:00. Due to the time to shut down and start all of the virtual machines, this may take up to an hour.
     
    Whilst we do this, the affected services will be offline whilst we power the server down, take it out of the rack and walk (don't run) very carefully to the other end of the building to put it into the racks in our nice new shiny bright blue private suite and plug it into our insanely fast brand new core network.
     
    This work only impacts virtual servers in London, not Manchester-based servers.

  • Date - 12/11/2022 20:00 - 13/11/2022 04:00
  • Last Updated - 15/11/2022 14:49
Border router capacity upgrade (London) (Resolved)
  • Priority - Medium
  • Affecting System - AS41000 network
  • On Monday the 3rd of October starting at 13:00 BST, we will be upgrading our THN-RT1 core router located in Telehouse North in London with new hardware. This upgrade will involve us swapping out some of the modules in the hardware for higher-capacity versions, and so we will be shutting down all of our peering and transit connections that are connected to that router for the duration of the work.

    In addition, we will be moving over to a higher capacity backhaul ring between Iomart DC12 (formerly LDeX1), Iomart DC13 (formerly LDeX2) and Telehouse North as part of this work. The new backhaul ring does not include Equinix MA1.
    The new backhaul ring is already connected and working, but not yet carrying any traffic. The old backhaul ring will continue to be available after the migration in case of any issues.

    Once the router has been upgraded, we will also be taking the opportunity to introduce a new Tier 1 transit connectivity provider as well as upgrade the capacity of the connections to our existing transit providers and peering exchanges.

    Due to the nature of the work being undertaken, the network should be considered at-risk for the duration of the maintenance window. Our network is highly redundant and we have significant capacity elsewhere to be able to carry the normal traffic volume whilst the THN-RT1 router is unavailable, so we are not expecting any disruption to services, although you may see increased latency.

    Update (03/10/2022 13:00): Unfortunately, due to issues encountered with the work in Manchester, we will be late starting the work in London.

    Update (03/10/2022 14:35): Whilst we can't carry out most of the work that we had scheduled for today, we have been able to upgrade one of our IP transit connections in THN.

    Update (03/10/2022 15:32): We have cancelled some of the London work today and will be re-scheduling it for tomorrow pending resolution of the IP transit issues in Manchester. In the meantime, the network remains stable and working normally via the London router in THN.

    Update (03/10/2022 15:52): The new IP transit connection is online.

    Update (03/10/2022 23:20): We have upgraded our LONAP peering. No further work will be taking place today.

    Update (04/10/2022 19:58): All inter-datacentre traffic is now being carried on the new, high-capacity backhaul ring. The old backhaul ring remains in place as a backup.

    Update (12/10/2022 14:15): We have scheduled the remaining work for 14/10/2022 starting at 12:00. This will involve upgrading our connections to two peering exchanges as well as completing the hardware upgrade.

    Update (14/10/2022 12:13): We have taken the London router out of the network so that we can begin the remaining work.

    Update (14/10/2022 13:16): The hardware upgrade work has been completed and all connections have been re-patched. We are double checking that everything is working as expected before we start bringing things back up.

    Update (14/10/2022 17:45): All upgrade work has been completed. All transit and peering connections are online. Full redundancy has been restored. We will closely monitor the network to ensure that everything is working as designed.

  • Date - 03/10/2022 13:00 - 03/10/2022 15:59
  • Last Updated - 12/11/2022 11:18
Border router capacity upgrade (Manchester) (Resolved)
  • Priority - Medium
  • Affecting System - AS41000 network
  • On Monday the 3rd of October starting at 13:00 BST, we will be physically relocating our EQMA1-RT1 core router in Manchester, as well as upgrading its hardware.

    The router will be moved from the Equinix MA1 data centre on the Manchester Science Park to the Iomart DC13 data centre at Media City.

    This upgrade will involve us swapping out some of the modules in the hardware for higher-capacity versions, and so we will be shutting down all of our peering and transit connections that are connected to that router for the duration of the work.

    In addition, we will be moving over to a higher capacity backhaul ring between Iomart DC12 (formerly LDeX1), Iomart DC13 (formerly LDeX2) and Telehouse North as part of this work. The new backhaul ring does not include Equinix MA1.
    The new backhaul ring is already connected and working, but not yet carrying any traffic. The old backhaul ring will continue to be available after the migration in case of any issues.

    Once the router has been upgraded, we will also be taking the opportunity to introduce a new Tier 1 transit connectivity provider as well as upgrade the capacity of the connections to our existing transit providers and peering exchanges.

    Due to the nature of the work being undertaken, the network should be considered at-risk for the duration of the maintenance window. Our network is highly redundant and we have significant capacity elsewhere to be able to carry the normal traffic volume whilst the EQMA1-RT1 router is unavailable, so we are not expecting any disruption to services, although you may see increased latency.

    Update (03/10/2022 09:08): We have successfully removed the EQMA1-RT1 router from the network and will now remove it from the rack and transport it to Iomart DC13.

    Update (03/10/2022 10:59): The router has been moved to Iomart DC13 and has been powered back up. We are patching in all of the connections.

    Update (03/10/2022 12:28): Patching is complete. We are double checking that everything is working before re-introducing the router to the network.

    Update (03/10/2022 12:55): Unfortunately we have found a problem with our IP transit connection which is preventing us from fully re-introducing the router to the network. We are working with our suppliers to resolve this.

    Update (03/10/2022 13:29): Our LINX Manchester peering is back online.

    Update (03/10/2022 15:16): We are still working with both of our IP transit providers to bring up the services in Iomart DC13. In the meantime, the network remains stable and working normally via the London router in THN.

    Update (03/10/2022 15:44): We have been able to bring up one of our IP transit connections in Iomart DC13. We are continuing to work on the other one.

    Update (04/10/2022 11:48): Our second IP transit connection in Iomart DC13 is now online. Full redundancy has been restored to the network.

    Update (04/10/2022 19:58): All inter-datacentre traffic is now being carried on the new, high-capacity backhaul ring. The old backhaul ring remains in place as a backup.

    Update (05/10/2022 13:16): All work in Manchester has been completed and the network has remained stable for an extended period of time, so we are marking this as resolved.

  • Date - 03/10/2022 09:00 - 03/10/2022 11:59
  • Last Updated - 12/10/2022 14:14
Reboot to fix email issues (Resolved)
  • Priority - Medium
  • Affecting Server - LDeX1-cPanel5
  • We will be performing a reboot of the LDeX1-cPanel5 server between 21:00 and 23:59 on 28/07/2022 in order to perform a Linux kernel update. This is to fix a CloudLinux/KernelCare bug which is affecting auto replies.

    Update (21:28): We are rebooting the server.

    Update (21:34): The server has been rebooted successfully and all services are functioning normally again. Please get in touch with our helpdesk in the usual manner if you are still experiencing any problems.

  • Date - 28/07/2022 21:00 - 28/07/2022 23:59
  • Last Updated - 28/07/2022 21:38
Reboot to fix email issues (Resolved)
  • Priority - Medium
  • Affecting Server - LDeX1-cPanel6
  • We will be performing a reboot of the LDeX1-cPanel6 server between 21:00 and 23:59 on 28/07/2022 in order to perform a Linux kernel update. This is to fix a CloudLinux/KernelCare bug which is affecting auto replies.

    Update (21:28): We are rebooting the server.

    Update (21:32): The server has been rebooted successfully and all services are functioning normally again. Please get in touch with our helpdesk in the usual manner if you are still experiencing any problems.

  • Date - 28/07/2022 21:00 - 28/07/2022 23:59
  • Last Updated - 28/07/2022 21:38
Reboot to fix email issues (Resolved)
  • Priority - Medium
  • Affecting Server - LDeX1-cPanel7
  • We will be performing a reboot of the LDeX1-cPanel7 server between 21:00 and 23:59 on 28/07/2022 in order to perform a Linux kernel update. This is to fix a CloudLinux/KernelCare bug which is affecting auto replies.

    Update (21:28): We are rebooting the server.

    Update (21:32): The server has been rebooted successfully and all services are functioning normally again. Please get in touch with our helpdesk in the usual manner if you are still experiencing any problems.

  • Date - 28/07/2022 21:00 - 28/07/2022 23:59
  • Last Updated - 28/07/2022 21:37
Reboot to resolve backup issue (Resolved)
  • Priority - Medium
  • Affecting Server - LDeX1-Plesk5
  • We will be performing a reboot of the LDeX1-Plesk5 server between 21:00 and 23:59 on 22/07/2022 in order to resolve an issue which is preventing us from taking backups.

    Update (21:52): We're rebooting the LDeX1-Plesk5 server now.

    Update (21:57): The server has been rebooted successfully and all services are functioning normally again. Please get in touch with our helpdesk in the usual manner if you are still experiencing any problems.

  • Date - 22/07/2022 21:00 - 22/07/2022 23:59
  • Last Updated - 22/07/2022 22:04
Reboot to resolve backup issue (Resolved)
  • Priority - Medium
  • Affecting Server - LDeX1-Plesk5
  • We will be performing a reboot of the LDeX1-Plesk5 server between 21:00 and 23:59 on 29/06/2022 in order to resolve an issue which is preventing us from taking backups.

  • Date - 29/06/2022 21:00 - 29/06/2022 23:59
  • Last Updated - 22/07/2022 11:03
Routine facility generator maintenance and black building test (Resolved)
  • Priority - Medium
  • Affecting System - Iomart DC12 (LDeX1)
  • We have been notified by Iomart that they will be carrying out remedial maintenance work on the generator at DC12 (formerly LDeX1) between 08:00 and 17:00 on 09/03/2022 as part of their planned preventative maintenance programme. This will be followed by a black building test to ensure that everything is operating correctly.

    The maintenance work that will be carried out in this maintenance window is as follows:

    1. Service the generator set and replace the PT pump Governor Needle Valve; once this has been completed, the steps below will be carried out.
    2. Check that the generator and all of the UPS units are operating fault-free.
    3. Operate the changeover key switch on the main switch board.
    4. Check that the generator has started and is running correctly with no faults.
    5. Check that all UPS units and the BMS are operating fault-free.
    6. Let the generator run for approximately 60 minutes to ensure there are no issues.

    Whilst no aspect of this work is expected to be service affecting, as detailed above, at various points during this work the DC12 facility may be operating at a level of reduced redundancy, and so all services hosted in DC12 should be considered "at-risk" for the duration of this work.

    If you have any questions or if you wish to double check how your devices are connected, please don't hesitate to get in touch with our helpdesk in the usual manner.

  • Date - 09/03/2022 08:00 - 09/03/2022 17:00
  • Last Updated - 29/06/2022 13:30
LDeX1-Plesk6 Outage (Resolved)
  • Priority - Critical
  • Affecting Server - LDeX1-Plesk6
  • We are aware of a loss of service affecting LDeX1-Plesk6; we are investigating and working to restore service.

    29/03/22 11:27 - Service has been restored; we were forced to reboot the server hardware. We will continue to monitor the server to ensure continued stability.

  • Date - 29/03/2022 11:21 - 29/03/2022 11:27
  • Last Updated - 29/03/2022 11:55
Reboot to resolve backup issue (Resolved)
  • Priority - Medium
  • Affecting Server - LDeX1-Plesk5
  • We will be performing a reboot of the LDeX1-Plesk5 server between 21:00 and 23:59 on 02/03/2022 in order to resolve an issue which is preventing us from taking backups.

    Update (21:35): We are rebooting the server now.

    Update (21:48): The server has been rebooted successfully and all services are functioning normally again. Please get in touch with our helpdesk in the usual manner if you are still experiencing any problems.

  • Date - 02/03/2022 21:00 - 02/03/2022 23:59
  • Last Updated - 02/03/2022 21:50
Reboot to resolve backup issue (Resolved)
  • Priority - Medium
  • Affecting Server - LDeX1-Plesk5
  • We will be performing a reboot of the LDeX1-Plesk5 server between 21:00 and 23:59 on 17/01/2022 in order to resolve an issue which is preventing us from taking backups.

    Update (22:13): We are rebooting the server now.

    Update (22:17): The server has been rebooted successfully and all services are functioning normally again. Please get in touch with our helpdesk in the usual manner if you are still experiencing any problems.

  • Date - 17/01/2022 21:00 - 17/01/2022 23:59
  • Last Updated - 17/01/2022 22:18
Filesystem check on LDeX1-Plesk4 (Resolved)
  • Priority - Medium
  • Affecting Server - LDeX1-Plesk4
  • We will be performing a precautionary filesystem check on the LDeX1-Plesk4 server on the 3rd of December between 21:00 and 23:59. This is a follow up on the database corruption issue experienced on the 2nd of December as a result of a failing SSD.

    This will require us to take all web, email and database services offline for the duration of the check, which we are hoping should be completed well within the maintenance window.

    If you have any questions or concerns about this maintenance work, please don't hesitate to get in touch with our helpdesk in the usual manner.

    Update: This work has been re-scheduled for 04/12/2021 between 21:00 and 23:59.

    Update: We are starting the filesystem check now.

    Update: The filesystem check has completed successfully without finding any problems. All services are functioning normally again. Please get in touch with our helpdesk in the usual manner if you are still experiencing any problems.

  • Date - 04/12/2021 21:00 - 04/12/2021 23:59
  • Last Updated - 04/12/2021 21:56
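
    For context, a precautionary check of this kind can be run read-only, so it reports problems without changing anything on disk. The sketch below is a generic illustration, not the exact procedure used on this server; the device path is a hypothetical assumption and the filesystem must be unmounted first.

      # Read-only filesystem check with the standard Linux fsck utility.
      # The device path is a hypothetical assumption.
      import subprocess
      import sys

      DEVICE = "/dev/sdb1"  # hypothetical data volume; must be unmounted first

      # -f forces a check even if the filesystem is marked clean;
      # -n answers "no" to every repair prompt, making the run read-only.
      result = subprocess.run(["fsck", "-f", "-n", DEVICE])

      # Per fsck(8): exit status 0 = no errors, 4 = errors left uncorrected.
      print("fsck exit status:", result.returncode)
      sys.exit(result.returncode)
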
MySQL/MariaDB corruption on LDeX1-Plesk4 (Resolved)
  • Priority - High
  • Affecting Server - LDeX1-Plesk4
  • We are seeing corruption of MySQL/MariaDB databases on the LDeX1-Plesk4 server. One of the SSDs is showing errors and despite being in a RAID array this has caused corruption in MySQL/MariaDB. We are therefore restoring the databases from our backups taken at 15:01 today.

    The problematic SSD has been removed from the RAID array.

    Update (20:45): The restore has been completed and MySQL/MariaDB databases on LDeX1-Plesk4 are working normally again. Please get in touch with our helpdesk in the usual manner if you are still experiencing any problems.

  • Date - 02/12/2021 19:01 - 02/12/2021 20:45
  • Last Updated - 02/12/2021 20:46
Reboot to resolve backup issue (Resolved)
  • Priority - Medium
  • Affecting Server - LDeX1-Plesk5
  • We will be performing a reboot of the LDeX1-Plesk5 server between 21:00 and 23:59 on 22/11/2021 in order to resolve an issue which is preventing us from taking backups.

    Update: The server has been rebooted successfully and all services are functioning normally again. Please get in touch with our helpdesk in the usual manner if you are still experiencing any problems.

  • Date - 22/11/2021 21:00 - 22/11/2021 23:59
  • Last Updated - 22/11/2021 22:47
Upstream network maintenance (Resolved)
  • Priority - Medium
  • Affecting System - AS41000 network
  • One of our upstream network providers has informed us that they will be carrying out routine maintenance work on one of our transit connections between 21:00 on 27/10/2021 and 09:00 on 28/10/2021, during which time the connection will be unavailable.

    On top of our substantial public peering, we have five upstream connections to three transit networks split across two data centres and significant spare capacity, so we do not anticipate any impact to service as a result of this maintenance work. However, as with all network maintenance of this nature, the network should be considered at risk of disruption for the duration of the maintenance window.

    If you have any questions or concerns about this maintenance work, please don't hesitate to get in touch with our helpdesk in the usual manner.

    Update: We have received confirmation from our supplier that they have completed their maintenance work so we have successfully re-enabled our connections to them. Full redundancy has been restored to the network.

  • Date - 27/10/2021 21:00 - 28/10/2021 09:00
  • Last Updated - 16/11/2021 14:37
Reboot to resolve backup issue (Resolved)
  • Priority - Medium
  • Affecting Server - LDeX1-Plesk5
  • We will be performing a reboot of the LDeX1-Plesk5 server between 21:00 and 23:59 on 19/10/2021 in order to resolve an issue which is preventing us from taking backups.

    Update (22:06): We are rebooting the server now.

    Update (22:13): The server has been rebooted successfully and all services are functioning normally again. Please get in touch with our helpdesk in the usual manner if you are still experiencing any problems.

  • Date - 19/10/2021 21:00 - 19/10/2021 23:59
  • Last Updated - 19/10/2021 22:16
Customer portal upgrade (Resolved)
  • Priority - Medium
  • Affecting System - Customer portal
  • We will be upgrading the customer portal (portal.freethought.uk) on 13/10/2021 between 21:00 and 23:59, so all functions on the customer portal will be unavailable during this time, including the ability to raise tickets. If you need to get in contact with us for any reason whilst this upgrade is taking place, then you can email us on support@freethought.uk.

    This is essential maintenance to ensure the ongoing stability of the service as well as introduce new features. Please accept our apologies for any inconvenience that this will cause and feel free to contact our support staff via the usual means if you have any questions or concerns.

    Update (22:48): The upgrade has been completed and the customer portal is available normally again.

  • Date - 13/10/2021 21:00 - 13/10/2021 23:59
  • Last Updated - 13/10/2021 22:49
MariaDB (MySQL) upgrade (Resolved)
  • Priority - Medium
  • Affecting Server - LDeX1-cPanel7
  • We will be upgrading the version of MariaDB on the LDeX1-cPanel7 server from 10.2 to 10.5 on 23/09/2021 between 21:00 and 23:59. We expect the upgrade to take around 5-10 minutes to complete.

    We will need to stop the MariaDB service for the duration of the upgrade, so this will affect any scripts depending on the MariaDB/MySQL service. The cPanel control panel will also be affected.

    Please accept our apologies for any inconvenience that this will cause and feel free to contact our support staff via the usual means if you have any questions or concerns.

    Update (21:49): We have begun the upgrade of MariaDB.

    Update (22:04): MariaDB has been upgraded successfully. All services are functioning normally again. Please get in touch with our helpdesk in the usual manner if you are still experiencing any problems.

  • Date - 23/09/2021 21:00 - 23/09/2021 23:59
  • Last Updated - 23/09/2021 22:07
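
    For readers curious what an upgrade like this involves, the outline below sketches the version check before and after, plus the `mysql_upgrade` step that MariaDB requires after a major-version jump such as 10.2 to 10.5. It is a generic illustration, assuming root credentials in ~/.my.cnf, not the exact procedure used on these servers.

      # Sketch of verifying a MariaDB major-version upgrade.
      # Assumes credentials in ~/.my.cnf; the package step is elided.
      import subprocess

      def server_version() -> str:
          """Ask the running server for its version string."""
          out = subprocess.run(
              ["mysql", "-N", "-e", "SELECT VERSION()"],
              capture_output=True, text=True, check=True,
          )
          return out.stdout.strip()

      print("before:", server_version())
      # ... stop the service, install the 10.5 packages, start the service ...
      subprocess.run(["mysql_upgrade"], check=True)  # checks and repairs the system tables
      print("after:", server_version())
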
MariaDB (MySQL) upgrade (Resolved)
  • Priority - Medium
  • Affecting Server - LDeX1-cPanel6
  • We will be upgrading the version of MariaDB on the LDeX1-cPanel6 server from 10.2 to 10.5 on 23/09/2021 between 21:00 and 23:59. We expect the upgrade to take around 5-10 minutes to complete.

    We will need to stop the MariaDB service for the duration of the upgrade, so this will affect any scripts depending on the MariaDB/MySQL service. The cPanel control panel will also be affected.

    Please accept our apologies for any inconvenience that this will cause and feel free to contact our support staff via the usual means if you have any questions or concerns.

    Update (21:49): We have begun the upgrade of MariaDB.

    Update (22:04): MariaDB has been upgraded successfully. All services are functioning normally again. Please get in touch with our helpdesk in the usual manner if you are still experiencing any problems.

  • Date - 23/09/2021 21:00 - 23/09/2021 23:59
  • Last Updated - 23/09/2021 22:07
MariaDB (MySQL) upgrade (Resolved)
  • Priority - Medium
  • Affecting Server - LDeX1-cPanel5
  • We will be upgrading the version of MariaDB on the LDeX1-cPanel5 server from 10.2 to 10.5 on 23/09/2021 between 21:00 and 23:59. We expect the upgrade to take around 5-10 minutes to complete.

    We will need to stop the MariaDB service for the duration of the upgrade, so this will affect any scripts depending on the MariaDB/MySQL service. The cPanel control panel will also be affected.

    Please accept our apologies for any inconvenience that this will cause and feel free to contact our support staff via the usual means if you have any questions or concerns.

    Update (21:49): We have begun the upgrade of MariaDB.

    Update (22:04): MariaDB has been upgraded successfully. All services are functioning normally again. Please get in touch with our helpdesk in the usual manner if you are still experiencing any problems.

  • Date - 23/09/2021 21:00 - 23/09/2021 23:59
  • Last Updated - 23/09/2021 22:07
Scheduled server reboot (Resolved)
  • Priority - Medium
  • Affecting Server - LDeX1-Plesk6
  • We will be performing scheduled maintenance including essential software updates on the LDeX1-Plesk6 server between 21:00 and 23:59 on 22/09/2021. This will require us to reboot the server so there will be a loss of service whilst this is done. The maintenance work itself should take less than 30 minutes to complete and we expect downtime to be much less than this.

    This is essential maintenance to ensure the ongoing stability and security of this server. Please accept our apologies for any inconvenience that this will cause and feel free to contact our support staff via the usual means if you have any questions or concerns.

    Update (22:47): We are rebooting the LDeX1-Plesk6 server now.

    Update (23:47): We are rebooting the LDeX1-Plesk6 server a second time.

    Update (23:49): The server has been rebooted successfully and all services are functioning normally again. Please get in touch with our helpdesk in the usual manner if you are still experiencing any problems.

  • Date - 22/09/2021 21:00 - 22/09/2021 23:59
  • Last Updated - 22/09/2021 23:50
MariaDB (MySQL) upgrade (Resolved)
  • Priority - Medium
  • Affecting Server - LDeX1-Plesk7
  • We will be upgrading the version of MariaDB on the LDeX1-Plesk7 server from 10.2 to 10.5 on 22/09/2021 between 21:00 and 23:59. We expect the upgrade to take around 5-10 minutes to complete.

    We will need to stop the MariaDB service for the duration of the upgrade, so this will affect any scripts depending on the MariaDB/MySQL service. The Plesk control panel will also be affected.

    Please accept our apologies for any inconvenience that this will cause and feel free to contact our support staff via the usual means if you have any questions or concerns.

    Update (23:30): We have begun the upgrade of MariaDB.

    Update (23:41): MariaDB has been upgraded successfully. All services are functioning normally again. Please get in touch with our helpdesk in the usual manner if you are still experiencing any problems.

  • Date - 22/09/2021 21:00 - 22/09/2021 23:59
  • Last Updated - 22/09/2021 23:42
MariaDB (MySQL) upgrade (Resolved)
  • Priority - Medium
  • Affecting Server - LDeX1-Plesk6
  • We will be upgrading the version of MariaDB on the LDeX1-Plesk6 server from 10.2 to 10.5 on 22/09/2021 between 21:00 and 23:59. We expect the upgrade to take around 5-10 minutes to complete.

    We will need to stop the MariaDB service for the duration of the upgrade, so this will affect any scripts depending on the MariaDB/MySQL service. The Plesk control panel will also be affected.

    Please accept our apologies for any inconvenience that this will cause and feel free to contact our support staff via the usual means if you have any questions or concerns.

    Update (23:32): We have begun the upgrade of MariaDB.

    Update (23:36): MariaDB has been upgraded successfully. All services are functioning normally again. Please get in touch with our helpdesk in the usual manner if you are still experiencing any problems.

  • Date - 22/09/2021 21:00 - 22/09/2021 23:59
  • Last Updated - 22/09/2021 23:37
MariaDB (MySQL) upgrade (Resolved)
  • Priority - Medium
  • Affecting Server - LDeX1-Plesk8
  • We will be upgrading the version of MariaDB on the LDeX1-Plesk8 server from 10.2 to 10.5 on 22/09/2021 between 21:00 and 23:59. We expect the upgrade to take around 5-10 minutes to complete.

    We will need to stop the MariaDB service for the duration of the upgrade, so this will affect any scripts depending on the MariaDB/MySQL service. The Plesk control panel will also be affected.

    Please accept our apologies for any inconvenience that this will cause and feel free to contact our support staff via the usual means if you have any questions or concerns.

    Update (21:51): We have begun the upgrade of MariaDB.

    Update (22:00): MariaDB has been upgraded successfully. All services are functioning normally again. Please get in touch with our helpdesk in the usual manner if you are still experiencing any problems.

  • Date - 22/09/2021 21:00 - 22/09/2021 23:59
  • Last Updated - 22/09/2021 22:06
MariaDB (MySQL) upgrade (Resolved)
  • Priority - Medium
  • Affecting Server - LDeX1-Plesk5
  • We will be upgrading the version of MariaDB on the LDeX1-Plesk5 server from 10.2 to 10.5 on 21/09/2021 between 21:00 and 23:59. We expect the upgrade to take around 5-10 minutes to complete.

    We will need to stop the MariaDB service for the duration of the upgrade, so this will affect any scripts depending on the MariaDB/MySQL service. The Plesk control panel will also be affected.

    Please accept our apologies for any inconvenience that this will cause and feel free to contact our support staff via the usual means if you have any questions or concerns.

    Update (22:01): We have begun the upgrade of MariaDB.

    Update (22:09): MariaDB has been upgraded successfully. All services are functioning normally again. Please get in touch with our helpdesk in the usual manner if you are still experiencing any problems.

  • Date - 21/09/2021 21:00 - 21/09/2021 23:59
  • Last Updated - 21/09/2021 22:10
MariaDB (MySQL) upgrade (Resolved)
  • Priority - Medium
  • Affecting Server - LDeX1-Plesk3
  • We will be upgrading the version of MariaDB on the LDeX1-Plesk3 server from 10.2 to 10.5 on 21/09/2021 between 21:00 and 23:59. We expect the upgrade to take around 5-10 minutes to complete.

    We will need to stop the MariaDB service for the duration of the upgrade, so this will affect any scripts depending on the MariaDB/MySQL service. The Plesk control panel will also be affected.

    Please accept our apologies for any inconvenience that this will cause and feel free to contact our support staff via the usual means if you have any questions or concerns.

    Update (21:14): We have begun the upgrade of MariaDB.

    Update (21:41): MariaDB has been upgraded successfully. All services are functioning normally again. Please get in touch with our helpdesk in the usual manner if you are still experiencing any problems.

  • Date - 21/09/2021 21:00 - 21/09/2021 23:59
  • Last Updated - 21/09/2021 21:43
MariaDB (MySQL) upgrade (Resolved)
  • Priority - Medium
  • Affecting Server - LDeX1-Plesk4
  • We will be upgrading the version of MariaDB on the LDeX1-Plesk4 server from 10.2 to 10.5 on 21/09/2021 between 21:00 and 23:59. We expect the upgrade to take around 5-10 minutes to complete.

    We will need to stop the MariaDB service for the duration of the upgrade, so this will affect any scripts depending on the MariaDB/MySQL service. The Plesk control panel will also be affected.

    Please accept our apologies for any inconvenience that this will cause and feel free to contact our support staff via the usual means if you have any questions or concerns.

    Update (21:21): We have begun the upgrade of MariaDB.

    Update (21:41): MariaDB has been upgraded successfully. All services are functioning normally again. Please get in touch with our helpdesk in the usual manner if you are still experiencing any problems.

  • Date - 21/09/2021 21:00 - 21/09/2021 23:59
  • Last Updated - 21/09/2021 21:42
Reboot to resolve backup issue (Resolved)
  • Priority - Medium
  • Affecting Server - LDeX1-Plesk5
  • We will be performing a reboot of the LDeX1-Plesk5 server between 21:00 and 23:59 on 09/09/2021 in order to resolve an issue which is preventing us from taking backups.

    Update (22:43): We are rebooting the server now.

    Update (22:50): The server has been rebooted successfully and all services are functioning normally again. Please get in touch with our helpdesk in the usual manner if you are still experiencing any problems.

  • Date - 09/09/2021 21:00 - 09/09/2021 23:59
  • Last Updated - 09/09/2021 23:00
MySQL on Plesk5 (Resolved)
  • Priority - Critical
  • Affecting Server - LDeX1-Plesk5
  • We are aware of an issue affecting MySQL on Plesk5 and are working to resolve it.

    Update (01:15): The issue has been resolved.

  • Date - 09/07/2021 00:26 - 09/07/2021 01:15
  • Last Updated - 09/07/2021 08:37
Software update on LDeX1-VPS8 (Resolved)
  • Priority - Medium
  • Affecting System - Virtual servers hosted on LDeX1-VPS8
  • On Monday the 5th of July at 21:00 BST, we will be performing a software update on the LDeX1-VPS8 node. This update will install a new version of the Linux kernel as well as the Xen hypervisor and so will require us to reboot the server in order for these changes to take effect.

    We will perform a graceful shutdown of virtual servers when starting this work, but you are welcome to manually shut your virtual server down yourself ahead of the work if you prefer.

    We expect this work to take about half an hour to complete, but in case of any unforeseen problems we have scheduled a maintenance window from 21:00 to 23:59 BST. 

    If you have any questions about this work, then please don't hesitate to get in touch with us by replying to this email or raising a ticket via the customer portal.

    Update (21:00): We are starting the maintenance work now. All virtual servers on LDeX1-VPS8 are shutting down.

    Update (22:01): The maintenance work has been completed successfully and all virtual servers are back online. Please get in touch with our helpdesk in the usual manner if you are still experiencing any problems.

  • Date - 05/07/2021 21:00 - 05/07/2021 23:59
  • Last Updated - 05/07/2021 22:02
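
    For those interested in what a "graceful shutdown of virtual servers" looks like on a Xen node, the sketch below uses the standard `xl` toolstack to ask every running guest to shut down cleanly before the host reboot. It is an illustration of the general approach, run as root on the node, rather than our exact tooling.

      # Gracefully shut down all Xen guests using the standard xl toolstack.
      # Run as root on the node itself.
      import subprocess

      def running_domains() -> list[str]:
          """List running guests by parsing `xl list`, skipping the header row and dom0."""
          out = subprocess.run(["xl", "list"], capture_output=True, text=True, check=True)
          names = [line.split()[0] for line in out.stdout.splitlines()[1:]]
          return [name for name in names if name != "Domain-0"]

      for dom in running_domains():
          print("shutting down", dom)
          # -w blocks until the guest has actually shut down
          subprocess.run(["xl", "shutdown", "-w", dom], check=True)
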
Reboot to resolve backup issue (Resolved)
  • Priority - Medium
  • Affecting Server - LDeX1-Plesk5
  • We will be performing a reboot of the LDeX1-Plesk5 server between 21:00 and 23:59 on 26/04/2021 in order to resolve an issue which is preventing us from taking backups.

    Update (22:35): We are rebooting the server now.

    Update (22:42): The server has been rebooted successfully and all services are functioning normally again. Please get in touch with our helpdesk in the usual manner if you are still experiencing any problems.

  • Date - 26/04/2021 21:00 - 26/04/2021 23:59
  • Last Updated - 26/04/2021 23:06
Upstream network maintenance (Resolved)
  • Priority - Medium
  • Affecting System - AS41000 network
  • One of our upstream network providers has informed us that they will be carrying out routine maintenance work on one of our transit connections between 21:00 on 24/04/2021 and 09:00 on 25/04/2021, during which time the connection will be unavailable.

    On top of our substantial public peering, we have five upstream connections to three transit networks split across two data centres and significant spare capacity, so we do not anticipate any impact to service as a result of this maintenance work. However, as with all network maintenance of this nature, the network should be considered at risk of disruption for the duration of the maintenance window.

    If you have any questions or concerns about this maintenance work, please don't hesitate to get in touch with our helpdesk in the usual manner.

    Update: We have received confirmation from our supplier that they have completed their maintenance work so we have successfully re-enabled our connections to them. Full redundancy has been restored to the network.

  • Date - 24/04/2021 21:00 - 25/04/2021 09:00
  • Last Updated - 25/04/2021 09:15
LDeX1-VPS7 Unresponsive (Resolved)
  • Priority - Critical
  • Affecting Server - Virtualizor
  • We are aware that one of our virtualisation nodes is currently offline, which has resulted in virtual servers on that node going offline. We are working on restoring service as quickly as possible and will post updates to this status message as we have them.

    Update (13:12): The LDeX1-VPS7 node rebooted unexpectedly. It is back online and virtual machines are starting up.

    Update (13:48): We have restored service for some virtual servers on this node and are continuing to investigate the underlying cause whilst restoring service to the remaining customers.

    Update (14:38): We are rebooting the LDeX1-VPS7 server.

    Update (15:52): We are rebooting the LDeX1-VPS7 server again in order to change the version of the Xen hypervisor in use.

    Update (16:15): The LDeX1-VPS7 server has finished booting and most virtual servers are back online, but unfortunately 5 virtual servers still won't start. We are continuing to investigate why.

    Update (16:58): There seems to be a problem with the RAID array in the server. We are continuing to investigate.

    Update (18:31): We have identified a failed drive; this should not have impacted the RAID array in the way that it did, but we are working on replacing the failed drive.

    Update (21:44): We are continuing to investigate the failed storage and have installed the spare drives that were on-site. More updates will follow.

    Update (22:53): Unfortunately the replacement drive has not resolved the storage issue and we are still investigating the RAID issue in an effort to get the node functioning again.

    Update (00:19): Engineers on-site are continuing to investigate the problematic RAID array. We are also implementing a replacement node for customers to be moved to as an alternative. Please check back for further updates as we have them.

    Update (07:37): Replacement hardware is installed and being configured; we continue to attempt to troubleshoot the problematic RAID array.

    Update (09:38): Despite our best efforts it has not been possible to recover the RAID array, and so the virtual servers hosted on the LDeX1-VPS7 node have become corrupt and unusable.
    We have brought new hardware online to provision fresh virtual servers and will be contacting customers shortly with the details.
    For customers with managed servers, we have begun restoring our backups and will have the servers back online as soon as possible.

    Update (11:56): Restores are still progressing and we will be in contact as quickly as possible as machines come online.

    Update (16:05): Restores are progressing and unmanaged machines are being provisioned.

    Update (20:15): All affected customers have been sent details of how to access their VMs. Some temporary workarounds are still in place for some customers and some additional work is still to be carried out; however, services are restored.

  • Date - 26/03/2021 13:01
  • Last Updated - 19/04/2021 11:12
Upstream network maintenance (Resolved)
  • Priority - Medium
  • Affecting System - AS41000 network
  • One of our upstream network providers has informed us that they will be carrying out routine maintenance work on one of our transit connections between 21:00 on 12/04/2021 and 09:00 on 13/04/2021, during which time the connection will be unavailable.

    On top of our substantial public peering, we have five upstream connections to three transit networks split across two data centres and significant spare capacity, so we do not anticipate any impact to service as a result of this maintenance work. However, as with all network maintenance of this nature, the network should be considered at risk of disruption for the duration of the maintenance window.

    If you have any questions or concerns about this maintenance work, please don't hesitate to get in touch with our helpdesk in the usual manner.

    Update: We have received confirmation from our supplier that they have completed their maintenance work so we have successfully re-enabled our connections to them. Full redundancy has been restored to the network.

  • Date - 12/04/2021 21:00 - 13/04/2021 09:00
  • Last Updated - 19/04/2021 11:02
Network wide instability (Resolved)
  • Priority - High
  • Affecting System - AS41000 network
  • We are currently investigating a network issue which is impacting all data centres.

    Update (13:08): We are seeing the network recover, however there are still periods of packet loss.

    Update (13:24): We are continuing to see 25-30% packet loss across the network. We have raised the issue with our backhaul connectivity supplier to investigate.

    Update (13:31): Our backhaul connectivity supplier confirms that they are investigating a possible DDoS against another customer on their network.

    Update (13:40): The packet loss seems to be improving.

    Update (14:05): We are waiting for confirmation from our supplier that the DDoS attack has been mitigated, however we haven't seen any further packet loss across the backhaul network since 13:43. 

    Update (14:14): Our supplier has confirmed that this was due to a DDoS attack against another customer. They have temporarily disconnected the customer in question whilst they evaluate how best to proceed. In the meantime, the network is stable and functioning normally.

    Update (15:46): We just saw another brief period of packet loss between 15:43 and 15:45.

    Update (17:08): Our supplier has confirmed that the previous instability was due to them trying to reintroduce some service for the customer being targeted, however they have disconnected them again due to the ongoing attacks.

  • Date - 07/04/2021 12:57 - 07/04/2021 13:43
  • Last Updated - 07/04/2021 17:09
Reboot to resolve backup issue (Resolved)
  • Priority - Medium
  • Affecting Server - LDeX1-Plesk5
  • We will be performing a reboot of the LDeX1-Plesk5 server between 21:00 and 23:59 on 03/04/2021 in order to resolve an issue which is preventing us from taking backups.

    Update (21:26): We are rebooting the server now.

    Update (21:32): The server has been rebooted successfully and all services are functioning normally again. Please get in touch with our helpdesk in the usual manner if you are still experiencing any problems.

  • Date - 03/04/2021 21:00 - 03/04/2021 23:59
  • Last Updated - 03/04/2021 21:37
Scheduled server reboot (Resolved)
  • Priority - Medium
  • Affecting Server - LDeX1-Plesk6
  • We will be performing scheduled maintenance including essential software updates on the LDeX1-Plesk6 server between 21:00 and 23:59 on 11/03/2021. This will require us to reboot the server so there will be a loss of service whilst this is done. The maintenance work itself should take less than 30 minutes to complete and we expect downtime to be much less than this.

    This is essential maintenance to ensure the ongoing stability and security of this server. Please accept our apologies for any inconvenience that this will cause and feel free to contact our support staff via the usual means if you have any questions or concerns.

    Update (22:10): We are rebooting the server now.

    Update (22:12): The server has been rebooted successfully and all services are functioning normally again. Please get in touch with our helpdesk in the usual manner if you are still experiencing any problems.

  • Date - 11/03/2021 21:00 - 11/03/2021 23:59
  • Last Updated - 11/03/2021 22:23
Reboot to resolve backup issue (Resolved)
  • Priority - Medium
  • Affecting Server - LDeX1-cPanel6
  • We will be performing a reboot of the LDeX1-cPanel6 server between 21:00 and 23:59 on 02/03/2021 in order to resolve an issue which is preventing us from taking backups.

    Update (21:27): We are rebooting the server now.

    Update (21:37): The server has been rebooted successfully and all services are functioning normally again. Please get in touch with our helpdesk in the usual manner if you are still experiencing any problems.

  • Date - 02/03/2021 21:00 - 02/03/2021 23:59
  • Last Updated - 02/03/2021 21:38
Reboot to resolve RAID card issue (Resolved)
  • Priority - Medium
  • Affecting Server - LDeX1-Plesk6
  • We will be performing a reboot of the LDeX1-Plesk6 server between 21:00 and 23:59 on 30/09/2020 in order to resolve an issue with the RAID card. There will be a loss of service for a few minutes whilst the server reboots.

    This is essential maintenance to ensure the ongoing stability and security of this server. Please accept our apologies for any inconvenience that this will cause and feel free to contact our support staff via the usual means if you have any questions or concerns.

    Update (21:05): The server is now rebooting.

    Update (21:09): The server is back online, however this has not resolved the issue with the RAID card.

    Update (21:15): We are going to try shutting the server down and then starting it back up again.

    Update (21:18): This has not resolved the issue either. We are going to take the opportunity to apply some outstanding Windows updates whilst we consider our options.

    Update (21:51): The Windows updates have finished installing and we are rebooting the server.

    Update (22:05): The server is finally going down for the reboot.

    Update (22:07): The server is back online again.

    Update (22:24): We're going to leave this for now and continue at 21:00 tomorrow night (01/10/2020).

    Update (01/10/2020 22:21): We are shutting the server down again, this time to physically disconnect both power supplies simultaneously and ensure that the backplane and RAID controller are power cycled and start back up from a cold boot.

    Update (22:38): The server is back online again, however unfortunately this has not resolved the issue. We will follow this up with the vendor before scheduling any further work.

    Update: It turns out that the issue wasn't with the RAID card or backplane at all, but with an SSD which appeared to work fine in other servers but not in the LDeX1-Plesk6 server. We have replaced the SSD in question and the RAID array has rebuilt.

  • Date - 30/09/2020 21:00 - 30/09/2020 23:59
  • Last Updated - 23/11/2020 22:26
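
    As general background, the first step in triaging a suspected RAID or disk fault is usually to query the array state and each member disk's SMART health. The sketch below shows this for Linux software RAID; the device names are hypothetical assumptions, and a hardware RAID card like the one above would need its vendor's CLI instead of `mdadm`.

      # Basic RAID/disk health triage with mdadm and smartctl.
      # Device names are hypothetical assumptions.
      import subprocess

      ARRAY = "/dev/md0"                 # hypothetical software RAID array
      DISKS = ["/dev/sda", "/dev/sdb"]   # hypothetical member disks

      # Array state: look for "State : clean" versus "degraded" in the output.
      detail = subprocess.run(["mdadm", "--detail", ARRAY], capture_output=True, text=True)
      print(detail.stdout)

      # Per-disk SMART health: the last line reports PASSED or FAILED.
      for disk in DISKS:
          smart = subprocess.run(["smartctl", "-H", disk], capture_output=True, text=True)
          print(disk, "->", smart.stdout.strip().splitlines()[-1])
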
Migration to LDeX1-cPanel7 (Resolved)
  • Priority - Medium
  • Affecting Server - LDeX1-cPanel3
  • We are migrating customers from LDeX1-cPanel3 to LDeX1-cPanel7 starting at 21:00 on 20/11/2020. Full details of the migration can be found in the email sent to customers on 17/11/2020, however please contact us if you have any questions.

    Update (01:35): The migration has been completed successfully and all services are back online. Please let us know if you are experiencing any problems.

  • Date - 20/11/2020 21:00 - 21/11/2020 01:35
  • Last Updated - 21/11/2020 01:36
Migration to LDeX1-cPanel7 (Resolved)
  • Priority - Medium
  • Affecting Server - LDeX1-cPanel2
  • We are migrating customers from LDeX1-cPanel2 to LDeX1-cPanel7 starting at 21:00 on 20/11/2020. Full details of the migration can be found in the email sent to customers on 17/11/2020, however please contact us if you have any questions.

    Update (01:35): The migration has been completed successfully and all services are back online. Please let us know if you are experiencing any problems.

  • Date - 20/11/2020 21:00 - 21/11/2020 01:35
  • Last Updated - 21/11/2020 01:36
Migration to LDeX1-cPanel5 (Resolved)
  • Priority - Medium
  • Affecting Server - LDeX1-cPanel1
  • We are migrating customers from LDeX1-cPanel1 to LDeX1-cPanel5 starting at 21:00 on 13/11/2020. Full details of the migration can be found in the email sent to customers on 10/11/2020, however please contact us if you have any questions.

    Update (02:07): The migration has been completed successfully and all services are back online. Please let us know if you are experiencing any problems.

  • Date - 13/11/2020 21:00 - 14/11/2020 02:07
  • Last Updated - 20/11/2020 20:40
Migration to LDeX1-cPanel6 (Resolved)
  • Priority - Medium
  • Affecting Server - LDeX1-cPanel4
  • We are migrating customers from LDeX1-cPanel4 to LDeX1-cPanel6 starting at 21:00 on 06/11/2020. Full details of the migration can be found in the email sent to customers on 02/11/2020, however please contact us if you have any questions.

    Update (02:47): The migration has been completed successfully and all services are back online. Please let us know if you are experiencing any problems.

  • Date - 06/11/2020 21:00 - 07/11/2020 07:00
  • Last Updated - 13/11/2020 20:52
DDoS attack (Resolved)
  • Priority - High
  • Affecting System - AS41000 Network
  • We are aware of a targeted DDoS attack against one of our customers that is impacting wider network connectivity. We are working to mitigate the impact as quickly as possible.

    Update (10:51): We have taken steps to mitigate the attack and connectivity is now stable across our network. We'll monitor the situation to ensure there are no further issues.

    Update (20:47): We haven't seen any further issues, so we believe this was resolved as of 10:51; however, we will continue to monitor the network closely.

  • Date - 06/11/2020 10:45 - 06/11/2020 10:51
  • Last Updated - 06/11/2020 20:48
Emergency Reboot (Resolved)
  • Priority - Critical
  • Affecting Server - LDeX1-Plesk5
  • LDeX1-Plesk5 has become unresponsive which has necessitated an emergency reboot. Service should be restored soon.

  • Date - 22/10/2020 11:34 - 22/10/2020 11:40
  • Last Updated - 22/10/2020 11:40
EQMA1-RT1 router unreachable (Resolved)
  • Priority - High
  • Affecting System - AS41000 network
  • We are investigating alerts from our monitoring system that the EQMA1-RT1 router has become unreachable. The rest of the network appears to have quickly routed around this and is functioning normally. Any traffic entering or leaving the network via Manchester will have seen some disruption as it was re-routed via London.

    Update (06:33): It seems that the line card on the router has crashed and rebooted.

    Update (06:39): The line card has finished booting and traffic is flowing normally again. There was some further disruption whilst diverted traffic routed back through Manchester. We are attempting to determine the cause of the line card crash.

    Update (07:25): We have seen some further errors in the logs which may be related and are continuing to investigate.

    Update (08:12): We are looking at whether we need to remove the router from the network and perform an emergency reboot and/or software update.

    Update (10:09): We have removed the EQMA1-RT1 router from the network and will be performing an emergency reboot shortly. This should not be disruptive to traffic as everything has been routed away from it in advance.
    Due to the reduced redundancy, the network should be considered "at risk" whilst we are undertaking this work.

    Update (11:25): Initial indications are that the reboot appears to have resolved the issue. Given that we had the router safely removed from the network, we have decided to also install some outstanding software updates.

    Update (12:04): The software updates have all been installed successfully and, following a reboot to apply them, everything appears to be functioning normally. We have therefore returned the router to service and full redundancy has been restored to the network. We will continue to monitor the router closely for any sign of further problems.

    Update (29/09/2020 17:06): We haven't seen any further problems with this router, so we believe that this issue is resolved.

  • Date - 25/09/2020 06:27 - 25/09/2020 06:39
  • Last Updated - 29/09/2020 17:07
Reboot to resolve backup issue (Resolved)
  • Priority - Medium
  • Affecting Server - LDeX1-cPanel1
  • We will be performing a reboot of the LDeX1-cPanel1 server between 21:00 and 23:59 on 23/09/2020 in order to resolve an issue which is preventing us from taking backups.

    Update (21:03): We're now rebooting the server.

    Update (21:07): The server has been rebooted successfully and all services are functioning normally again. Please get in touch with our helpdesk in the usual manner if you are still experiencing any problems.

  • Date - 23/09/2020 21:00 - 23/09/2020 23:59
  • Last Updated - 23/09/2020 21:11
Upstream network maintenance (Resolved)
  • Priority - Medium
  • Affecting System - AS41000 network
  • One of our upstream network providers has informed us that they will be carrying out routine maintenance work on one of our transit connections between 02:00 and 06:00 on 11/08/2020, during which time the connection will be unavailable.

    On top of our substantial public peering, we have five upstream connections to three transit networks split across two data centres, with significant spare capacity, so we do not anticipate any impact to service as a result of this maintenance work. However, as with all network maintenance of this nature, the network should be considered at risk of disruption for the duration of the maintenance window.

    If you have any questions or concerns about this maintenance work, please don't hesitate to get in touch with our helpdesk in the usual manner.

    Update: The provider in question has informed us that they have re-scheduled this work for 02:00 to 06:00 BST on 25/08/2020.

    Update: The provider in question has once again re-scheduled this work, this time for 02:00 to 06:00 BST on 08/09/2020.

    Update: The provider in question has once again delayed this work, although they have not given a new date for it to go ahead. We will mark this as resolved until we have a new date.

  • Date - 08/09/2020 02:00 - 08/09/2020 06:00
  • Last Updated - 11/09/2020 15:26
Firewall firmware update (Resolved)
  • Priority - Medium
  • Affecting System - LDeX1 and LDeX2 firewall clusters
  • We will be performing a firmware update on the firewall clusters in LDeX1 and LDeX2 between 21:00 and 23:59 on 05/08/2020.

    This will affect all web hosting, reseller hosting and ULTRA hosting services, as well as dedicated server and co-location customers using our managed firewall services.
    Virtual server customers, along with dedicated server and co-location customers not using our managed firewall services, will not be affected.

    Each cluster member will be updated and restarted before being returned to the cluster. As such, we expect that disruption from this work should be minimal - just a few seconds during the failover process.

    This is essential maintenance which is necessary in order to ensure the ongoing stability and security of these firewalls. Please accept our apologies for any inconvenience that this will cause and feel free to contact our support staff via the usual means if you have any questions or concerns.

    Update: This work has been re-scheduled for 06/08/2020 between 21:00 and 23:59.

    Update: Due to the DDoS attacks today, this work has been re-scheduled for 10/08/2020 between 21:00 and 23:59.

    Update: This work has been re-scheduled for 18/08/2020 between 21:00 and 23:59.

    Update (21:22): We are beginning the upgrade on the LDeX1-FWCL1 firewall cluster.

    Update (21:33): The LDeX1-FWCL1 firewall cluster has been successfully upgraded. We are beginning the upgrade on the LDeX1-FWCL2 firewall cluster.

    Update (21:46): The LDeX1-FWCL2 firewall cluster has been successfully upgraded. We are beginning the upgrade on the LDeX1-FWCL3 firewall cluster.

    Update (21:56): The LDeX1-FWCL3 firewall cluster has been successfully upgraded. We are beginning the upgrade on the LDeX2-FWCL1 firewall cluster.

    Update (22:06): The LDeX2-FWCL1 firewall cluster has been successfully upgraded. All firewalls are now running the latest firmware version. Our monitoring did not detect any interruption to services, but please get in touch with our helpdesk in the usual manner if you are experiencing any problems.

  • Date - 18/08/2020 21:00 - 18/08/2020 23:59
  • Last Updated - 18/08/2020 22:08
Reboot to resolve backup issue (Resolved)
  • Priority - Medium
  • Affecting Server - LDeX1-cPanel2
  • We will be performing a reboot of the LDeX1-cPanel2 server between 21:00 and 23:59 on 18/08/2020 in order to resolve an issue which is preventing us from taking backups.

    Update (21:15): The server is rebooting.

    Update (21:16): The server has been rebooted successfully and all services are functioning normally again. Please get in touch with our helpdesk in the usual manner if you are still experiencing any problems.

  • Date - 18/08/2020 21:00 - 18/08/2020 23:59
  • Last Updated - 18/08/2020 21:18
Reduced peering capacity (Resolved)
  • Priority - Medium
  • Affecting System - AS41000 network
  • LINX will be performing maintenance work on our LON2 connection between 00:00 and 06:00 on 11/08/2020. We will therefore be disabling all peering over LON2 whilst this work is carried out.

    During this period, all traffic will be routed via our other peering connections on LINX LON1, LINX Manchester and LONAP as well as our three IP transit providers and so this will not be service affecting.

    If you have any questions or concerns about this maintenance work, please don't hesitate to get in touch with our helpdesk in the usual manner.

    Update: LINX completed their maintenance work successfully and full peering capacity has been restored to the network.

  • Date - 11/08/2020 00:00 - 11/08/2020 06:00
  • Last Updated - 13/08/2020 10:29
Network Disruption (Resolved)
  • Priority - Critical
  • Affecting System - AS41000 Network
  • We are aware of a network issue impacting a large proportion of network traffic. We are investigating the cause and working to resolve the issue as quickly as possible. We suspect this is a repeat attack against our network.

    Update (20:52): We have confirmed that this issue was indeed an attack against our network. It has been mitigated for the time being and service has been restored. We will, of course, continue to monitor the situation closely to ensure network stability going forward.

  • Date - 06/08/2020 20:19 - 06/08/2020 20:44
  • Last Updated - 06/08/2020 21:18
DDoS attack (Resolved)
  • Priority - High
  • Affecting System - AS41000 network
  • An attack was detected against our network at around 13:29 which may have caused some brief disruption. We mitigated the attack at around 13:36 and continue to monitor the situation.

    Update: The attack appears to have subsided at 14:04.

  • Date - 06/08/2020 13:29 - 06/08/2020 14:04
  • Last Updated - 06/08/2020 15:37
Routine facility LV distribution maintenance (Resolved)
  • Priority - Medium
  • Affecting System - LDeX2
  • We have been notified by iomart that they will be carrying out annual maintenance work on the LV distribution board in the LDeX2 facility between 09:00 and 18:00 on 29/07/2020 as part of their planned preventative maintenance programme.

    The maintenance work which will be carried out during this window is as follows:

    1. The generator circuit breakers will be inspected and maintained first whilst the facility is still operating on utility mains power as normal.
    2. The facility will then be transferred onto generator power and the utility mains circuit breaker will be serviced and tested.
    3. Whilst still on generator power, the A side UPS will be put into external bypass to allow for the UPS circuit breakers to be maintained. Once complete the A side UPS will be restored before repeating with the B side UPS.
    4. The facility will then be restored back on to utility mains power supply and the generators will be put back into auto standby.
    5. The air-conditioning circuit breakers will then be tested and maintained one at a time in order to ensure resilience of the redundant cooling capacity.

    Whilst no aspect of this work is expected to be service affecting, as detailed above, at various points during this work the LDeX2 facility may be operating at a reduced level of redundancy, so all services hosted in LDeX2 should be considered "at-risk" for the duration of this work.

    If you have any questions or if you wish to double check how your devices are connected, please don't hesitate to get in touch with our helpdesk in the usual manner.

    Update: This maintenance has been successfully completed without any impact to service.

  • Date - 29/07/2020 09:00 - 29/07/2020 18:00
  • Last Updated - 31/07/2020 19:05
RAID card firmware and driver update (Resolved)
  • Priority - High
  • Affecting Server - LDeX1-Plesk6
  • The manufacturer of the RAID card in the LDeX1-Plesk6 server has advised us that we need to update the firmware and drivers for the RAID card as part of troubleshooting an issue. We will complete this work between 21:00 and 23:59 on 09/07/2020. This will require us to reboot the server, so there will be a brief loss of service whilst this is done. The maintenance work itself should take less than 30 minutes to complete and we expect downtime to be much less than this.
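
    For illustration only, controller firmware updates of this kind are typically driven from the command line with the vendor's management utility (for example, storcli on MegaRAID-style controllers). The controller index and file name below are hypothetical placeholders, and our actual procedure will follow the vendor's instructions for this specific card. A minimal sketch in Python:

        # Illustrative sketch only: flash new firmware to RAID controller 0 using
        # the vendor's storcli utility, then reboot so that it takes effect.
        # "mr_fw.rom" is a placeholder, not the actual firmware image.
        import subprocess

        subprocess.run(["storcli64", "/c0", "download", "file=mr_fw.rom"], check=True)
        subprocess.run(["shutdown", "/r", "/t", "0"], check=True)  # Windows-style reboot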

    This is essential maintenance to ensure the ongoing stability and security of this server. Please accept our apologies for any inconvenience that this will cause and feel free to contact our support staff via the usual means if you have any questions or concerns.

    Update (21:15): We have started the update process.

    Update (21:21): The firmware update has completed. We are now updating the drivers.

    Update (21:39): The driver update has completed. We are now rebooting the LDeX1-Plesk6 server so that the updates take effect.

    Update (22:04): We are performing a second reboot of the LDeX1-Plesk6 server.

    Update (22:07): The second reboot has been completed and the server is back online after 3 minutes of downtime.

  • Date - 09/07/2020 21:00 - 09/07/2020 23:59
  • Last Updated - 09/07/2020 22:09
Additional disk space for LDeX1-cPanel3 (Resolved)
  • Priority - Medium
  • Affecting Server - LDeX1-cPanel3
  • We will be performing scheduled maintenance to increase the disk space for the /home partition on the LDeX1-cPanel3 server on 20/06/2020 between 21:00 and 23:59. This will require us to shut the server down briefly, so there will be a loss of service whilst this is done. The maintenance work itself should take less than 30 minutes to complete and we expect downtime to be much less than this.
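
    For context, a purely illustrative sketch of the kind of operation involved, assuming /home sits on an LVM logical volume with an ext4 filesystem (the volume group and volume names below are hypothetical):

        # Illustrative sketch: grow a hypothetical LVM volume backing /home and
        # then grow the ext4 filesystem on top of it. An offline ext4 resize
        # requires a forced filesystem check first.
        import subprocess

        subprocess.run(["lvextend", "-L", "+100G", "/dev/vg0/home"], check=True)  # grow the volume
        subprocess.run(["e2fsck", "-f", "/dev/vg0/home"], check=True)             # mandatory pre-resize check
        subprocess.run(["resize2fs", "/dev/vg0/home"], check=True)                # grow the filesystem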

    This is essential maintenance to ensure the ongoing stability and security of this server. Please accept our apologies for any inconvenience that this will cause and feel free to contact our support staff via the usual means if you have any questions or concerns.

    Update (21:56): We are shutting the server down now.

    Update (22:26): The /home partition has been successfully resized and the server is starting back up.

    Update (22:32): The server is back online and all services are working normally again. Please get in touch with our helpdesk in the usual manner if you are still experiencing any problems.

  • Date - 20/06/2020 21:00 - 20/06/2020 23:59
  • Last Updated - 20/06/2020 22:36
Additional disk space for LDeX1-cPanel2 (Resolved)
  • Priority - Medium
  • Affecting Server - LDeX1-cPanel2
  • We will be performing scheduled maintenance to increase the disk space for the /home partition on the LDeX1-cPanel2 server on 20/06/2020 between 21:00 and 23:59. This will require us to shut the server down briefly, so there will be a loss of service whilst this is done. The maintenance work itself should take less than 30 minutes to complete and we expect downtime to be much less than this.

    This is essential maintenance to ensure the ongoing stability and security of this server. Please accept our apologies for any inconvenience that this will cause and feel free to contact our support staff via the usual means if you have any questions or concerns.

    Update (21:56): We are shutting the server down now.

    Update (22:24): The /home partition has been successfully resized and the server is starting back up.

    Update (22:26): The server is back online and all services are working normally again. Please get in touch with our helpdesk in the usual manner if you are still experiencing any problems.

  • Date - 20/06/2020 21:00 - 20/06/2020 23:59
  • Last Updated - 20/06/2020 22:28
Additional disk space for LDeX1-cPanel1 (Resolved)
  • Priority - Medium
  • Affecting Server - LDeX1-cPanel1
  • We will be performing scheduled maintenance to increase the disk space for the /home partition on the LDeX1-cPanel1 server on 20/06/2020 between 21:00 and 23:59. This will require us to shut the server down briefly, so there will be a loss of service whilst this is done. The maintenance work itself should take less than 30 minutes to complete and we expect downtime to be much less than this.

    This is essential maintenance to ensure the ongoing stability and security of this server. Please accept our apologies for any inconvenience that this will cause and feel free to contact our support staff via the usual means if you have any questions or concerns.

    Update (21:10): We are shutting the server down now.

    Update (21:55): We have run into some unexpected complications whilst carrying out this work which have slowed us down, but we're making progress and hope to have the server back online shortly.

    Update (22:10): The /home partition has been successfully resized and the server is starting back up.

    Update (22:13): The server is back online and all services are working normally again. Please get in touch with our helpdesk in the usual manner if you are still experiencing any problems.

  • Date - 20/06/2020 21:00 - 20/06/2020 23:59
  • Last Updated - 20/06/2020 22:17
Reboot to resolve backup issue (Resolved)
  • Priority - Medium
  • Affecting Server - LDeX1-Plesk5
  • We will be performing a reboot of the LDeX1-Plesk5 server between 21:00 and 23:59 on 20/06/2020 in order to resolve an issue which is preventing us from taking backups.

    Update (21:24): The server is rebooting.

    Update (21:28): The server has been rebooted successfully and all services are functioning normally again. Please get in touch with our helpdesk in the usual manner if you are still experiencing any problems.

  • Date - 20/06/2020 21:00 - 20/06/2020 23:59
  • Last Updated - 20/06/2020 21:29
Network disruption (Resolved)
  • Priority - High
  • Affecting Other - AS41000
  • We are currently investigating disruption to our network which we suspect may be attributable to a DDoS attack against a customer. We'll update this ticket once we have more information and have mitigated the issue.

    Update: We have identified the nature of the outage and it is indeed a malicious attack. We have taken steps to mitigate the attack which has restored stability to the network. We'll continue to monitor the situation.

    Update: The last attack traffic was seen at 12:39, however we will continue to monitor in case of any further attacks. Please accept our apologies for the inconvenience caused.

  • Date - 21/05/2020 12:18 - 21/05/2020 12:29
  • Last Updated - 24/05/2020 20:52
Network disruption (Resolved)
  • Priority - High
  • Affecting System - AS41000 Network
  • We are aware of packet loss currently impacting our network, this appears to be a DDoS attack directed at a customer. We are working to mitigate the attack as quickly as possible and will update this ticket with information as it becomes available.

    Update: Mitigations appear to have been effective and the attack has now subsided. We will continue to monitor in case of any further attacks. Please accept our apologies for the inconvenience caused.

  • Date - 24/05/2020 19:56 - 24/05/2020 20:01
  • Last Updated - 24/05/2020 20:49
Scheduled server reboot (Resolved)
  • Priority - Medium
  • Affecting Server - LDeX1-Plesk6
  • We will be performing scheduled maintenance including essential software updates on the LDeX1-Plesk6 server between 21:00 and 23:59 on 13/05/2020. This will require us to reboot the server so there will be a loss of service whilst this is done. The maintenance work itself should take less than 30 minutes to complete and we expect downtime to be much less than this.

    This is essential maintenance to ensure the ongoing stability and security of this server. Please accept our apologies for any inconvenience that this will cause and feel free to contact our support staff via the usual means if you have any questions or concerns.

  • Date - 13/05/2020 21:00 - 13/05/2020 23:59
  • Last Updated - 14/05/2020 00:12
Backhaul network maintenance (Resolved)
  • Priority - Medium
  • Affecting System - AS41000 network
  • We have been notified by our backhaul connectivity provider that they will be carrying out scheduled maintenance work on their equipment in LDeX1, LDeX2 and THN between 22:00 on 28/02/2020 and 04:00 on 29/02/2020, during which time they will be reconfiguring BGP on six switches as part of work to remove legacy infrastructure from their core network.

    Our network is designed to withstand the loss of one circuit and continue to function as normal, so this maintenance work should not be service affecting.

    Whilst we do not anticipate a complete loss of connectivity to any part of our network during this maintenance window, there may be a brief period of increased latency or packet loss at the start and end of the maintenance work whilst the network reconverges. As with all network maintenance of this nature, the network should be considered at risk of disruption for the duration of the maintenance window.

    If you have any questions or concerns about this maintenance work, please don't hesitate to get in touch with our helpdesk in the usual manner.

    Update: This maintenance has been successfully completed without any impact to service.

  • Date - 28/02/2020 22:00 - 29/02/2020 04:00
  • Last Updated - 08/05/2020 15:13
UPS installation work (Resolved)
  • Priority - Medium
  • Affecting Other - LDeX2 data centre
  • We have been notified by London Data eXchange that they will be carrying out further work to increase the UPS capacity at the LDeX2 facility between 09:00 and 17:00 each day from 16/03/2020 to 18/03/2020.

    This work will involve a period where each of the UPS systems needs to be placed into bypass whilst the new modules are brought into service, so the LDeX2 facility will be operating at a reduced level of redundancy for some periods whilst this work is being carried out. As such, all services hosted in LDeX2 should be considered "at-risk" for the duration of this work.

    Whilst in bypass, any equipment connected to the feed supplied by that UPS unit will be running on generator power and as such should be considered at-risk in case there is an issue with the generators. Utility mains power will remain available throughout the maintenance work as a backup if required.

    This maintenance work will be carried out on both the A-side and B-side UPS systems separately. At no point will both systems be under maintenance simultaneously.

    All devices with dual power supplies should be connected to both the A-side and B-side PDUs, so that in the event of any problems on one of the feeds, the other feed will still be available. All devices with single power supplies should be fed from our in-rack ATS units, which can switch between the two feeds fast enough that connected devices do not see any loss of power.

    If you have any questions or if you wish to double check how your devices are connected, please don't hesitate to get in touch with our helpdesk in the usual manner.

    Update: This maintenance has been successfully completed without any impact to service.

  • Date - 16/03/2020 09:00 - 18/03/2020 17:00
  • Last Updated - 08/05/2020 15:13
Scheduled server reboot (Resolved)
  • Priority - Medium
  • Affecting Server - LDeX1-Plesk6
  • We will be performing scheduled maintenance including essential software updates on the LDeX1-Plesk6 server between 21:00 and 23:59 on 26/02/2020. This will require us to reboot the server so there will be a loss of service whilst this is done. The maintenance work itself should take less than 30 minutes to complete and we expect downtime to be much less than this.

    This is essential maintenance to ensure the ongoing stability and security of this server. Please accept our apologies for any inconvenience that this will cause and feel free to contact our support staff via the usual means if you have any questions or concerns.

    Update: We have begun the maintenance work.

    Update: The LDeX1-Plesk6 server is now rebooting.

    Update: We are performing a second reboot of the LDeX1-Plesk6 server.

    Update: The server has been rebooted and all services are functioning normally again after approximately 1 minute of downtime for each of the two reboots. Please get in touch with our helpdesk in the usual manner if you are still experiencing any problems.

  • Date - 26/02/2020 21:00 - 26/02/2020 23:59
  • Last Updated - 26/02/2020 22:30
Upstream network maintenance (Resolved)
  • Priority - Medium
  • Affecting System - AS41000 network
  • One of our upstream network providers has informed us that they will be carrying out routine maintenance work on one of our transit connections between 02:00 and 06:00 on 20/02/2020, during which time the connection will be unavailable.

    On top of our substantial public peering, we have five upstream connections to three transit networks split across two data centres, with significant spare capacity, so we do not anticipate any impact to service as a result of this maintenance work. However, as with all network maintenance of this nature, the network should be considered at risk of disruption for the duration of the maintenance window.

    If you have any questions or concerns about this maintenance work, please don't hesitate to get in touch with our helpdesk in the usual manner.

    Update: We have received confirmation from our supplier that they have completed their maintenance work so we have successfully re-enabled our connections to them. Full redundancy has been restored to the network.

  • Date - 20/02/2020 02:00 - 20/02/2020 06:00
  • Last Updated - 26/02/2020 14:56
Upstream network maintenance (Resolved)
  • Priority - Medium
  • Affecting System - AS41000 network
  • One of our upstream network providers has informed us that they will be carrying out routine maintenance work on one of our transit connections between 01:00 and 06:00 on 21/02/2020, during which time the connection will be unavailable.

    On top of our substantial public peering, we have five upstream connections to three transit networks split across two data centres, with significant spare capacity, so we do not anticipate any impact to service as a result of this maintenance work. However, as with all network maintenance of this nature, the network should be considered at risk of disruption for the duration of the maintenance window.

    If you have any questions or concerns about this maintenance work, please don't hesitate to get in touch with our helpdesk in the usual manner.

    Update: We have received confirmation from our supplier that they have completed their maintenance work so we have successfully re-enabled our connections to them. Full redundancy has been restored to the network.

  • Date - 21/02/2020 01:00 - 21/02/2020 06:00
  • Last Updated - 26/02/2020 14:56
Plesk upgrade (Resolved)
  • Priority - Medium
  • Affecting Server - LDeX1-Plesk5
  • We will be performing scheduled maintenance work to upgrade the Plesk control panel software on the LDeX1-Plesk5 server starting at 21:00 on 19/02/2020.
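
    For those interested in what this involves: Plesk upgrades of this kind are typically driven by the Plesk installer utility. A rough sketch in Python (the exact invocation varies by Plesk version, and this is run by our engineers rather than anything customers need to do):

        # Illustrative sketch: apply all available Plesk product updates
        # non-interactively via the Plesk installer utility.
        import subprocess

        subprocess.run(["plesk", "installer", "install-all-updates"], check=True)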

    Due to the nature of the work, all services hosted on the LDeX1-Plesk5 server may be briefly unavailable for a few minutes at some point during the maintenance window. We have scheduled a maintenance window of 21:00 to 23:59, however the upgrade should only take 10 minutes or so.

    If you have any questions about this maintenance or about any other aspect of your hosting please don't hesitate to get in touch with our helpdesk in the usual manner or give us a call on 03300 882130.

  • Date - 19/02/2020 21:00 - 19/02/2020 23:59
  • Last Updated - 19/02/2020 23:03
Plesk upgrade (Resolved)
  • Priority - Medium
  • Affecting Server - LDeX1-Plesk6
  • We will be performing scheduled maintenance work to upgrade the Plesk control panel software on the LDeX1-Plesk6 server starting at 21:00 on 19/02/2020.

    Due to the nature of the work, all services hosted on the LDeX1-Plesk6 server may be briefly unavailable for a few minutes at some point during the maintenance window. We have scheduled a maintenance window of 21:00 to 23:59, however the upgrade should only take 10 minutes or so.

    If you have any questions about this maintenance or about any other aspect of your hosting please don't hesitate to get in touch with our helpdesk in the usual manner or give us a call on 03300 882130.

  • Date - 19/02/2020 21:00 - 19/02/2020 23:59
  • Last Updated - 19/02/2020 23:03
Plesk upgrade (Resolved)
  • Priority - Medium
  • Affecting Server - LDeX1-Plesk4
  • We will be performing scheduled maintenance work to upgrade the Plesk control panel software on the LDeX1-Plesk4 server starting at 21:00 on 19/02/2020.

    Due to the nature of the work, all services hosted on the LDeX1-Plesk4 server may be briefly unavailable for a few minutes at some point during the maintenance window. We have scheduled a maintenance window of 21:00 to 23:59, however the upgrade should only take 10 minutes or so.

    If you have any questions about this maintenance or about any other aspect of your hosting please don't hesitate to get in touch with our helpdesk in the usual manner or give us a call on 03300 882130.

  • Date - 19/02/2020 21:00 - 19/02/2020 23:59
  • Last Updated - 19/02/2020 22:39
Plesk upgrade (Resolved)
  • Priority - Medium
  • Affecting Server - LDeX1-Plesk3
  • We will be performing scheduled maintenance work to upgrade the Plesk control panel software on the LDeX1-Plesk3 server starting at 21:00 on 19/02/2020.

    Due to the nature of the work, all services hosted on the LDeX1-Plesk3 server may be briefly unavailable for a few minutes at some point during the maintenance window. We have scheduled a maintenance window of 21:00 to 23:59, however the upgrade should only take 10 minutes or so.

    If you have any questions about this maintenance or about any other aspect of your hosting please don't hesitate to get in touch with our helpdesk in the usual manner or give us a call on 03300 882130.

  • Date - 19/02/2020 21:00 - 19/02/2020 23:59
  • Last Updated - 19/02/2020 22:39
Border router software update (THN-RT1) (Resolved)
  • Priority - Medium
  • Affecting System - AS41000 network
  • We will be performing a software update on our THN-RT1 border router in Telehouse North on 12/02/2020 starting at 20:00.

    This update will require us to reboot the THN-RT1 border router and so will be service affecting for any customers with services terminating directly on the router such as IP transit. Other services will not be affected as we will gracefully route traffic away from the router prior to performing the update.
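
    To illustrate what "gracefully routing traffic away" means in practice: many BGP implementations support the well-known GRACEFUL_SHUTDOWN community (RFC 8326), which tells neighbours to depreference our routes so that traffic drains away before the reboot. A sketch of how this looks on an FRRouting-style device (our routers may use different tooling, so this is illustrative only):

        # Illustrative sketch: enable BGP graceful shutdown so that neighbours
        # drain traffic away from this router ahead of maintenance.
        import subprocess

        subprocess.run([
            "vtysh",
            "-c", "configure terminal",
            "-c", "router bgp 41000",
            "-c", "bgp graceful-shutdown",
        ], check=True)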

    We have sufficient upstream IP transit and peering capacity elsewhere in the network to handle the load from the THN-RT1 border router without causing congestion, however we will be running without redundancy and so the network should be considered at-risk for the duration of the work.

    If you have any questions or concerns about this maintenance work, please don't hesitate to get in touch with our helpdesk in the usual manner.

    Update: We are beginning this maintenance work.

    Update: The upgrade has been completed successfully and the THN-RT1 router has been returned to the network. Full redundancy has been restored to the network. 

  • Date - 12/02/2020 20:00 - 13/02/2020 02:00
  • Last Updated - 12/02/2020 22:33
Border router software update (EQMA1-RT1) (Resolved)
  • Priority - Medium
  • Affecting System - AS41000 network
  • We will be performing a software update on our EQMA1-RT1 border router in Equinix MA1 (formerly Telecity Williams House/Kilburn House) on 05/02/2020 starting at 20:00.

    This update will require us to reboot the EQMA1-RT1 border router and so will be service affecting for any customers with services terminating directly on the router such as IP transit. Other services will not be affected as we will gracefully route traffic away from the router prior to performing the update.

    We have sufficient upstream IP transit and peering capacity elsewhere in the network to handle the load from the EQMA1-RT1 border router without causing congestion, however we will be running without redundancy and so the network should be considered at-risk for the duration of the work.

    If you have any questions or concerns about this maintenance work, please don't hesitate to get in touch with our helpdesk in the usual manner.

    Update: We are beginning this maintenance work.

    Update: Unfortunately, whilst removing the EQMA1-RT1 router from the network there was a network-wide loss of service between 20:17 and 20:32. We have rolled the changes back and are currently assessing the cause.

    Update: We have determined that the cause of the issue was a combination of a configuration mistake and an inadvertent rollback of the configuration on one network device. We are correcting this and attempting to remove the EQMA1-RT1 router from the network again.

    Update: The EQMA1-RT1 router has been successfully removed from the network and the software update is now in progress.

    Update: The upgrade has been completed successfully and the EQMA1-RT1 router has been returned to the network. Full redundancy has been restored to the network. Please accept our apologies for the unexpected disruption earlier.

  • Date - 05/02/2020 20:00 - 06/02/2020 02:00
  • Last Updated - 05/02/2020 22:26
Possible power loss in Manchester / LDeX2 (Resolved)
  • Priority - Critical
  • Affecting Other - LDeX2 - Manchester
  • We are currently investigating possible power issues in our Manchester data centre, LDeX2. 

    Update - 12:08: It has been confirmed that there was an interruption to power in LDeX2 at around 11:52, which should now be resolved. We are working to confirm that all systems are operational again.

    Update - 12:30: We have confirmed that power has been restored. This issue impacted only a single power feed within the data centre. If you are experiencing problems please contact support.

  • Date - 03/02/2020 11:52 - 03/02/2020 12:30
  • Last Updated - 03/02/2020 12:33
Border router software update (THN-RT1) (Resolved)
  • Priority - Medium
  • Affecting System - AS41000 network
  • We will be performing a software update on our THN-RT1 border router in Telehouse North on 13/06/2019 starting at 20:00.

    This update will require us to reboot the THN-RT1 border router and so will be service affecting for any customers with services terminating directly on the router such as IP transit. Other services will not be affected as we will gracefully route traffic away from the router prior to performing the update.

    We have sufficient upstream IP transit and peering capacity elsewhere in the network to handle the load from the THN-RT1 border router without causing congestion, however we will be running without redundancy and so the network should be considered at-risk for the duration of the work.

    If you have any questions or concerns about this maintenance work, please don't hesitate to get in touch with our helpdesk in the usual manner.

    Update: The software update has been completed successfully and the THN-RT1 router has been returned to normal service. Full redundancy has been restored to the network.

  • Date - 13/06/2019 20:00 - 14/06/2019 02:00
  • Last Updated - 28/01/2020 17:45
Emergency security update (Resolved)
  • Priority - High
  • Affecting Server - LDeX1-Plesk6
  • We will be performing emergency software updates on the LDeX1-Plesk6 server between 21:00 and 23:59 on 14/01/2020. This will require us to reboot the server so there will be a loss of service whilst this is done. The maintenance work itself should take less than 30 minutes to complete and we expect downtime to be much less than this.

    This is essential maintenance in order to ensure the ongoing security of this server in response to a critical security flaw (CVE-2020-0601) in the Windows cryptography API discovered by the NSA. Please accept our apologies for any inconvenience that this will cause and feel free to contact our support staff via the usual means if you have any questions or concerns.

    Update: The server has been rebooted and all services are functioning normally again after approximately 3 minutes of downtime across two reboots. Please get in touch with our helpdesk in the usual manner if you are still experiencing any problems.

  • Date - 14/01/2020 21:00 - 14/01/2020 23:59
  • Last Updated - 14/01/2020 22:28
Routine facility UPS maintenance (Resolved)
  • Priority - Medium
  • Affecting System - LDeX2 UPS
  • We have been notified by London Data eXchange that they will be carrying out work on the UPS in the LDeX2 facility between 09:00 and 17:00 on 08/01/2020 as part of their planned preventative maintenance programme.

    This work will involve a firmware update, visual inspection of the components in each of the UPS systems, functional testing and cleaning of the fans. As such, the LDeX2 facility may be operating at a reduced level of redundancy whilst this work is being carried out, so all services hosted in LDeX2 should be considered "at-risk" for the duration of this work.

    This maintenance work will be carried out on both the A-side and B-side UPS systems separately. At no point will both systems be under maintenance simultaneously. During this maintenance period, it may be necessary for either of the UPS units to be placed into bypass mode. This means that any equipment connected to the feed supplied by that UPS unit will be running on raw mains power and as such should be considered at-risk in case there is an outage on the utility mains feed. Generator backup power will remain available throughout the maintenance work if required.

    All devices with dual power supplies should be connected to both the A-side and B-side PDUs, so that in the event of any problems on one of the feeds, the other feed will still be available. All devices with single power supplies should be fed from our in-rack ATS units, which can switch between the two feeds fast enough that connected devices do not see any loss of power.

    If you have any questions or if you wish to double check how your devices are connected, please don't hesitate to get in touch with our helpdesk in the usual manner.

    Update: This maintenance has been successfully completed without any impact to service.

  • Date - 08/01/2020 09:00 - 08/01/2020 17:00
  • Last Updated - 14/01/2020 20:34
UPS installation work (Resolved)
  • Priority - Medium
  • Affecting Other - LDeX2 data centre
  • We have been notified by London Data eXchange that they will be carrying out further work to increase the UPS capacity at the LDeX2 facility between 09:00 and 17:00 on 28/11/2019 and 29/11/2019.

    This work will involve a period where each of the UPS systems needs to be placed into bypass whilst the new modules are brought into service, so the LDeX2 facility will be operating at a reduced level of redundancy for some periods whilst this work is being carried out. As such, all services hosted in LDeX2 should be considered "at-risk" for the duration of this work.

    Whilst in bypass, any equipment connected to the feed supplied by that UPS unit will be running on generator power and as such should be considered at-risk in case there is an issue with the generators. Utility mains power will remain available throughout the maintenance work as a backup if required.

    This maintenance work will be carried out on both the A-side and B-side UPS systems separately. At no point will both systems be under maintenance simultaneously.

    All devices with dual power supplies should be connected to both the A-side and B-side PDUs, so that in the event of any problems on one of the feeds, the other feed will still be available. All devices with single power supplies should be fed from our in-rack ATS units, which can switch between the two feeds fast enough that connected devices do not see any loss of power.

    If you have any questions or if you wish to double check how your devices are connected, please don't hesitate to get in touch with our helpdesk in the usual manner.

    Update: This maintenance has been successfully completed without any impact to service.

  • Date - 28/11/2019 09:00 - 29/11/2019 17:00
  • Last Updated - 02/01/2020 16:48
Server unresponsive (Resolved)
  • Priority - High
  • Affecting Server - LDeX1-cPanel4
  • Our monitoring has alerted us to a problem with the LDeX1-cPanel4 server. We are currently unable to access the server remotely or via the local console, so we are performing an emergency reboot.

    Update (10:09): The server is back online again and all services are running normally. We will investigate further to see if we can establish what caused the server to lock up and we will monitor closely in case of any further issues. Please accept our apologies for the disruption and don't hesitate to get in touch with our helpdesk in the usual manner if you are still experiencing any issues.

  • Date - 29/11/2019 09:57 - 29/11/2019 10:09
  • Last Updated - 29/11/2019 10:10
DDoS Attack (Resolved)
  • Priority - High
  • Affecting System - AS41000 Network
  • We are aware of intermittent availability issues affecting services on our network. The root cause appears to be a DDoS attack directed at one of our customers; we are taking steps to mitigate the problem.

    Update: The attack has been mitigated for the time being; we are continuing to monitor the situation and to work with the client who was the target of the attack.

  • Date - 23/11/2019 15:09
  • Last Updated - 25/11/2019 11:26
Hardware upgrade (Resolved)
  • Priority - High
  • Affecting Server - LDeX1-Plesk1
  • We will be performing essential maintenance work starting at 21:00 on 06/11/2019 in order to migrate customers from the existing server LDeX1-Plesk1 to the brand new LDeX1-Plesk5 server as part of a major hardware upgrade. This migration will take several hours, during which time all web site and email services will be unavailable.

    Customers using the shared IP address on LDeX1-Plesk1 (194.110.243.11) will move to 194.110.243.196 as part of the migration. Customers using dedicated IP addresses won't see any change as we will move these over as part of the migration.
    Customers using our name servers will have the new IP address applied automatically, but customers using third party name servers will need to manually update their DNS records. If you don't know whether you are affected by this, please get in touch and we'll be happy to take a look for you (see the sketch below for one way to check).
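
    If you would like to check for yourself which IP address your domain currently resolves to, the sketch below shows one way to do it in Python (replace example.com with your own domain):

        # Check whether a domain has picked up the new shared IP address yet.
        import socket

        NEW_IP = "194.110.243.196"
        resolved = socket.gethostbyname("example.com")  # use your own domain here
        print(resolved, "(migrated)" if resolved == NEW_IP else "(still the old IP)")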

    We will forward all traffic from the old to the new IP address for a few weeks after the migration in order to give everyone sufficient time to make the change.

    Please accept our apologies for any inconvenience that this will cause and feel free to get in touch with our support staff via the usual means if you have any questions or concerns.

    6/11/19 21:03 - We have commenced the upgrade work; we'll post updates to this status periodically as the work progresses.

    7/11/19 01:21 - The migration to the new server is continuing. The transfer is currently taking longer than originally estimated, but we are monitoring progress closely.

    7/11/19 07:25 - Unfortunately this migration attempt was taking an extremely long time to complete, for reasons we are still investigating. This means that it would not have completed within the advised time window, so for now we have cancelled the migration and put the original Plesk1 server back into operation. We will email shortly with another window to complete this process.

    8/11/19 20:29 - We have once again commenced the upgrade work. As before, we'll post updates to this status periodically as the work progresses.

    9/11/19 09:07 - There are just 10 accounts left to migrate and we're cleaning up a few minor issues. We're hopeful that everything will be finished soon.

    9/11/19 11:05 - We have been investigating why these last few accounts are taking so long to copy and have found a potential issue with having a large number of dedicated IPv6 addresses, so are making some changes which we hope will speed things up.

    9/11/19 12:45 - Unfortunately we have had to restart the last few migrations as they had hung. Hopefully with the changes that we made they will copy much quicker this time.

    9/11/19 17:35 - We are still working on problematic migrations for three accounts, but as everything else has migrated over we have put the new server live. Inbound email is now being routed to the new server and DNS has been updated to point to the new IP address, so websites using our name servers should be loading.

    9/11/19 18:28 - The old 194.110.243.11 IP address has been pointed to the new server, so websites using third party name servers should also be loading. All services should be working normally again. Please accept our apologies for the disruption caused by the unexpectedly long time taken to complete the migration, and don't hesitate to let us know if you are still experiencing any problems with your service.

    We have been monitoring the new server since the completion of migration work on 9th November and have concluded it is stable and operating normally. This issue is now resolved and the migration is considered completed. 

  • Date - 06/11/2019 21:00 - 07/11/2019 09:00
  • Last Updated - 14/11/2019 09:31
MariaDB (MySQL) upgrade (Resolved)
  • Priority - Medium
  • Affecting Server - LDeX1-Plesk4
  • We will be upgrading the version of MariaDB on the LDeX1-Plesk4 server from 10.1 to 10.2 on 06/11/2019 between 21:00 and 23:59. We expect the upgrade to take around 5-10 minutes to complete.

    We will need to stop the MariaDB service for the duration of the upgrade, so this will affect any scripts depending on the MariaDB/MySQL service. The Plesk control panel will also be affected.
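
    For illustration, an upgrade of this kind typically follows a stop / update packages / start / mysql_upgrade sequence. A minimal sketch in Python, assuming a systemd-based server whose package repository has already been switched to the 10.2 series (package and service names are the usual MariaDB ones and may differ on our servers):

        # Illustrative sketch of a MariaDB 10.1 -> 10.2 upgrade sequence.
        import subprocess

        subprocess.run(["systemctl", "stop", "mariadb"], check=True)
        subprocess.run(["yum", "-y", "update", "MariaDB-server", "MariaDB-client"], check=True)
        subprocess.run(["systemctl", "start", "mariadb"], check=True)
        subprocess.run(["mysql_upgrade"], check=True)  # migrate system tables to the new version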

    Please accept our apologies for any inconvenience that this will cause and feel free to contact our support staff via the usual means if you have any questions or concerns.

    6/11/19 22:04 - Software maintenance work completed successfully. 

  • Date - 06/11/2019 21:00 - 06/11/2019 23:59
  • Last Updated - 06/11/2019 22:07
MariaDB (MySQL) upgrade (Resolved)
  • Priority - Medium
  • Affecting Server - LDeX1-Plesk3
  • We will be upgrading the version of MariaDB on the LDeX1-Plesk3 server from 10.1 to 10.2 on 06/11/2019 between 21:00 and 23:59. We expect the upgrade to take around 5-10 minutes to complete.

    We will need to stop the MariaDB service for the duration of the upgrade, so this will affect any scripts depending on the MariaDB/MySQL service. The Plesk control panel will also be affected.

    Please accept our apologies for any inconvenience that this will cause and feel free to contact our support staff via the usual means if you have any questions or concerns.

    6/11/19 22:04 - Software maintenance work completed successfully. 

  • Date - 06/11/2019 21:00 - 06/11/2019 23:59
  • Last Updated - 06/11/2019 22:04
Upstream network maintenance (Resolved)
  • Priority - Medium
  • Affecting System - AS41000 network
  • One of our upstream network providers has informed us that they will be carrying out routine maintenance work on one of our transit connections between 01:00 and 05:00 on 09/10/2019, during which time the connection will be unavailable.

    On top of our substantial public peering, we have five upstream connections to three transit networks split across two data centres, with significant spare capacity, so we do not anticipate any impact to service as a result of this maintenance work. However, as with all network maintenance of this nature, the network should be considered at risk of disruption for the duration of the maintenance window.

    If you have any questions or concerns about this maintenance work, please don't hesitate to get in touch with our helpdesk in the usual manner.

    Update: We have received confirmation from our supplier that they have completed their maintenance work so we have successfully re-enabled our connections to them. Full redundancy has been restored to the network.

  • Date - 09/10/2019 01:00 - 09/10/2019 05:00
  • Last Updated - 05/11/2019 10:27
UPS installation work (Resolved)
  • Priority - Medium
  • Affecting Other - LDeX2 data centre
  • We have been notified by London Data eXchange that they will be carrying out installation work to increase the UPS capacity at the LDeX2 facility between 09:00 and 17:00 on 14/10/2019, 15/10/2019 and 16/10/2019.

    This work will involve a period where each of the UPS systems needs to be placed into bypass whilst the new modules are brought into service, so the LDeX2 facility will be operating at a reduced level of redundancy for some periods whilst this work is being carried out. As such, all services hosted in LDeX2 should be considered "at-risk" for the duration of this work.

    Whilst in bypass, any equipment connected to the feed supplied by that UPS unit will be running on generator power and as such should be considered at-risk in case there is an issue with the generators. Utility mains power will remain available throughout the maintenance work as a backup if required.

    This maintenance work will be carried out on both the A-side and B-side UPS systems separately. At no point will both systems be under maintenance simultaneously.

    All devices with dual power supplies should be connected to both the A-side and B-side PDUs, so that in the event of any problems on one of the feeds, the other feed will still be available. All devices with single power supplies should be fed from our in-rack ATS units, which can switch between the two feeds fast enough that connected devices do not see any loss of power.

    If you have any questions or if you wish to double check how your devices are connected, please don't hesitate to get in touch with our helpdesk in the usual manner.

    Update: This maintenance has been successfully completed without any impact to service.

  • Date - 14/10/2019 09:00 - 16/10/2019 17:00
  • Last Updated - 05/11/2019 10:26
Server unresponsive (Resolved)
  • Priority - High
  • Affecting Server - LDeX1-cPanel4
  • The LDeX1-cPanel4 server is currently not responding remotely and we are unable to log in on the local console, so we are performing an emergency reboot.

    Update: The server is back online again and all services are running normally. We will investigate further to see if we can establish what caused the server to lock up and we will monitor closely in case of any further issues. Please accept our apologies for the disruption and don't hesitate to get in touch with our helpdesk in the usual manner if you are still experiencing any issues.

  • Date - 03/11/2019 17:51 - 03/11/2019 18:12
  • Last Updated - 03/11/2019 18:23
Reboot to resolve backup issue (Resolved)
  • Priority - Medium
  • Affecting Server - LDeX1-Plesk1
  • We will be performing a reboot of the LDeX1-Plesk1 server between 21:00 and 23:59 on 17/10/2019 in order to resolve an issue which is preventing us from taking backups.

    Update (22:12): The server is rebooting.

    Update (22:23): The server has been rebooted successfully and all services are functioning normally again. Please get in touch with our helpdesk in the usual manner if you are still experiencing any problems.

  • Date - 17/10/2019 21:00 - 17/10/2019 23:59
  • Last Updated - 17/10/2019 22:33
Server unresponsive (Resolved)
  • Priority - High
  • Affecting Server - LDeX1-Plesk6
  • We are currently investigating an issue with the LDeX1-Plesk6 server which is unresponsive.

    Update: There seems to be a problem with the RAID controller or RAID array on the server which is preventing it from booting into Windows.

    Update: After investigating, we have determined that the quickest way to get the server back online is to restore from the last good backup. This is now in progress.

    Update: We currently estimate that the restore will finish at around 20:00.

    Update: The restore is approximately 25% completed.

    Update: The restore is approximately 60% completed after 4 hours.

    Update: The restore has completed and the server has booted into Windows. Email is functioning normally, but web sites are currently still offline.

    Update: Web sites are back online and so we believe that normal service has resumed. We are continuing to check that everything is working correctly. We will email all customers with full details of what happened tomorrow as well as follow up with the hardware vendor to investigate the cause of these problems.

  • Date - 06/10/2019 12:42 - 06/10/2019 21:04
  • Last Updated - 06/10/2019 21:11
Reboot of LDeX1-Plesk4 (Resolved)
  • Priority - Medium
  • Affecting Server - LDeX1-Plesk4
  • We will be rebooting the LDeX1-Plesk4 server between 21:00 and 23:59 on 05/09/2019 in order to resolve an issue which is preventing us from configuring new IP addresses on the server.

    Please accept our apologies for any inconvenience that this will cause and feel free to contact our support staff via the usual means if you have any questions or concerns.

    Update: This issue was the result of a broken CloudLinux package update, which has now been resolved without requiring a reboot. This work has therefore been cancelled.

  • Date - 05/09/2019 21:00 - 05/09/2019 23:59
  • Last Updated - 05/09/2019 10:31
Default page for websites on LDeX1-Plesk1 (Resolved)
  • Priority - High
  • Affecting Server - LDeX1-Plesk1
  • We are aware of an issue with websites hosted on the LDeX1-Plesk1 server displaying the default page. The server is currently regenerating configuration for all web sites and once this is complete normal service should be resumed. Please accept our apologies for the inconvenience.
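
    For context, regenerating the web server configuration on a Plesk server is typically done with the Plesk repair utility; a sketch in Python (this is run by our engineers and is not something customers need to do):

        # Illustrative sketch: rebuild the web server configuration for all
        # domains on a Plesk server, answering "yes" to all prompts.
        import subprocess

        subprocess.run(["plesk", "repair", "web", "-y"], check=True)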

    Update (16:50): The server has successfully finished regenerating the configuration and normal service has resumed again. Please accept our apologies for the disruption and don't hesitate to get in touch with our helpdesk in the usual manner if you are still experiencing any issues.

  • Date - 27/08/2019 16:34 - 27/08/2019 16:50
  • Last Updated - 27/08/2019 16:53
Server inaccessible (Resolved)
  • Priority - High
  • Affecting Server - LDeX1-cPanel4
  • We are currently investigating a problem with the LDeX1-cPanel4 server.

    Update (14:55): The server appears to have locked up, so we are performing an emergency reboot.

    Update (14:59): The server is back online again and all services are running normally. We will investigate further to see if we can establish what caused the server to lock up and we will monitor closely in case of any further issues. Please accept our apologies for the disruption and don't hesitate to get in touch with our helpdesk in the usual manner if you are still experiencing any issues.

  • Date - 14/08/2019 14:53 - 14/08/2019 14:59
  • Last Updated - 14/08/2019 14:59
Reboot to resolve backup issue (Resolved)
  • Priority - Medium
  • Affecting Server - LDeX1-Plesk1
  • We will be performing a reboot of the LDeX1-Plesk1 server between 21:00 and 23:59 on 17/07/2019 in order to resolve an issue which is preventing us from taking backups.

    Update (22:35): The server is rebooting.

    Update (22:43): The server has been rebooted successfully and all services are functioning normally again. Please get in touch with our helpdesk in the usual manner if you are still experiencing any problems.

  • Date - 17/07/2019 21:00 - 17/07/2019 23:59
  • Last Updated - 17/07/2019 22:44
Mail server software upgrade (Resolved)
  • Priority - Medium
  • Affecting Server - LDeX1-Plesk6
  • We will be upgrading the mail server software on the LDeX1-Plesk6 server on 03/07/2019 between 21:00 and 23:59. This will disrupt all web site and email functionality on the server for a few minutes during the upgrade and may require a reboot of the server once complete.

    Please accept our apologies for any inconvenience that this will cause and feel free to contact our support staff via the usual means if you have any questions or concerns.

    Update (21:11): The upgrade process has started.

    Update (21:24): The upgrade has finished successfully, we are now rebooting the server.

    Update (21:26): The server has been rebooted and all services are functioning normally again. Please get in touch with our helpdesk in the usual manner if you are still experiencing any problems.

  • Date - 03/07/2019 21:00 - 03/07/2019 23:59
  • Last Updated - 03/07/2019 21:35
Reboot to resolve backup issue (Resolved)
  • Priority - Medium
  • Affecting Server - LDeX1-cPanel4
  • We will be performing a reboot of the LDeX1-cPanel4 server between 21:00 and 23:59 on 01/07/2019 in order to resolve an issue which is preventing us from taking backups.

    Update (22:24): The server is rebooting

    Update (22:30): The server has been rebooted successfully and all services are functioning normally again. Please get in touch with our helpdesk in the usual manner if you are still experiencing any problems.

  • Date - 01/07/2019 21:00 - 01/07/2019 23:59
  • Last Updated - 01/07/2019 22:30
Upstream network maintenance (Resolved)
  • Priority - Medium
  • Affecting System - AS41000 network
  • One of our upstream network providers has informed us that they will be carrying out routine maintenance work on one of our transit connections between 22:00 on 11/06/2019 and 08:00 on 12/06/2019, during which time the connection will be unavailable.

    On top of our substantial public peering, we have five upstream connections to three transit networks split across two data centres and significant spare capacity, so we do not anticipate any impact to service as a result of this maintenance work. However, as with all network maintenance of this nature, the network should be considered at risk for the duration of the maintenance window.

    If you have any questions or concerns about this maintenance work, please don't hesitate to get in touch with our helpdesk in the usual manner.

    Update: We have received confirmation from our supplier that they have completed their maintenance work so we have successfully re-enabled our connections to them. Full redundancy has been restored to the network.

  • Date - 11/06/2019 22:00 - 12/06/2019 08:00
  • Last Updated - 12/06/2019 06:44
Scheduled server reboot (Resolved)
  • Priority - Medium
  • Affecting Server - LDeX1-Plesk6
  • We will be performing scheduled maintenance including essential software updates on the LDeX1-Plesk6 server between 21:00 and 23:59 on 11/06/2019. This will require us to reboot the server so there will be a loss of service whilst this is done. The maintenance work itself should take less than 30 minutes to complete and we expect downtime to be much less than this.

    This is essential maintenance to ensure the ongoing stability and security of this server. Please accept our apologies for any inconvenience that this will cause and feel free to contact our support staff via the usual means if you have any questions or concerns.

    Update: The server has been rebooted and all services are functioning normally again after approximately 3 minutes of downtime. Please get in touch with our helpdesk in the usual manner if you are still experiencing any problems.

  • Date - 11/06/2019 21:00 - 11/06/2019 23:59
  • Last Updated - 11/06/2019 23:04
DDoS Attack (Resolved)
  • Priority - Critical
  • Affecting System - AS41000 network
  • We are currently experiencing a DDoS attack against one of our customers which is having a knock-on impact on accessibility to our network in general. We are working to mitigate the attack and will post updates as they become available.

    Update (16:01): We have managed to mitigate the attack for the time being. We're monitoring the situation and will post a further update once we're confident it has been resolved.

    Update (20:07): We haven't seen any further issues as a result of this attack, so we believe that this is now resolved. Please accept our apologies for the disruption caused by the attack earlier today.

  • Date - 10/06/2019 15:53 - 10/06/2019 15:59
  • Last Updated - 10/06/2019 20:08
Virtual server management (Resolved)
  • Priority - Medium
  • Affecting System - Virtual servers
  • We will be carrying out some essential maintenance work on the management platform for our virtual servers on 05/06/2019 between 20:00 and 23:59. During this period customers may be unable to perform management tasks such as restarting, reinstalling, upgrading/downgrading or accessing the console via VNC on virtual servers.

    This maintenance work will not affect the virtual servers themselves, which will remain running as normal throughout.

    If you have any questions about this maintenance or about any other aspect of your hosting please don't hesitate to get in touch with our helpdesk in the usual manner or give us a call on 03300 882130.

    Update: Unfortunately we need to postpone this work. There has been no impact to customer service. We will provide a new time and date as soon as possible.

    Update: This work has been rescheduled for 07/06/2019 at 20:00.

    Update: The maintenance work has been completed successfully and virtual servers can be managed as normal again.

  • Date - 07/06/2019 20:00 - 07/06/2019 23:59
  • Last Updated - 07/06/2019 23:13
Border router software update (EQMA1-RT1) (Resolved)
  • Priority - Medium
  • Affecting System - AS41000 network
  • We will be performing a software update on our EQMA1-RT1 border router in Equinix MA1 (formerly Telecity Williams House/Kilburn House) on 06/06/2019 starting at 20:00.

    This update will require us to reboot the EQMA1-RT1 border router and so will be service affecting for any customers with services terminating directly on the router such as IP transit. Other services will not be affected as we will gracefully route traffic away from the router prior to performing the update.

    We have sufficient upstream IP transit and peering capacity elsewhere in the network to handle the load from the EQMA1-RT1 border router without causing congestion, however we will be running without redundancy and so the network should be considered at-risk for the duration of the work.

    If you have any questions or concerns about this maintenance work, please don't hesitate to get in touch with our helpdesk in the usual manner.

    Update: Unfortunately we've run into some problems with this software update. We're going to have to leave the EQMA1-RT1 border router out of service for now and look to re-introduce it to the network tomorrow. In the meantime, the London router (THN-RT1) is functioning normally and we have ample capacity to run without the Manchester router.

    Update: The software update has been completed successfully and the EQMA1-RT1 router has been returned to normal service. Full redundancy has been restored to the network.

  • Date - 06/06/2019 20:00 - 07/06/2019 02:00
  • Last Updated - 07/06/2019 10:54
Routine facility UPS maintenance (Resolved)
  • Priority - Medium
  • Affecting System - LDeX1 UPS
  • We have been notified by London Data eXchange that they will be carrying out work on the UPS in the LDeX1 facility between 09:00 and 17:00 on 03/04/2019 as part of their planned preventative maintenance programme.

    This work will involve a firmware update, visual inspection of the components in each of the UPS systems, functional testing and cleaning of the fans. The LDeX1 facility may therefore be operating at a level of reduced redundancy whilst this work is being carried out, so all services hosted in LDeX1 should be considered "at-risk" for the duration of this work.

    This maintenance work will be carried out on both the A-side and B-side UPS systems separately. At no point will both systems be under maintenance simultaneously. During this maintenance period, it may be necessary for either of the UPS units to be placed into bypass mode. This means that any equipment connected to the feed supplied by that UPS unit will be running on raw mains power and as such should be considered at-risk in case there is an outage on the utility mains feed. Generator backup power will remain available throughout the maintenance work if required.

    All devices with dual power supplies should be connected to both the A-side and B-side PDUs, so in the event of any problems on one of the feeds they will still have the other feed available. All devices with single power supplies should be fed from our in-rack ATS units, which can switch between the two feeds fast enough that connected devices do not see any loss of power.

    If you have any questions or if you wish to double check how your devices are connected, please don't hesitate to get in touch with our helpdesk in the usual manner.

    Update: This maintenance has been successfully completed without any impact to service.

  • Date - 03/04/2019 09:00 - 03/04/2019 17:00
  • Last Updated - 30/04/2019 14:30
DDoS attack (Resolved)
  • Priority - Critical
  • Affecting System - AS41000 network
  • We have detected a DDoS attack against one of our customers which saturated some parts of our core network. We have therefore taken the appropriate steps to mitigate this attack and restore service for affected customers.

    Due to the intermittent saturation of parts of the core network, some customers may have briefly seen packet loss affecting their services. This should be completely resolved as of 21:43.

    Please accept our apologies for the inconvenience caused by this attack and feel free to contact our support staff via the usual means if you are still experiencing any problems or have any questions or concerns.

    Update (10th April @ 23:33): We have not seen any further issues and therefore believe that this is resolved, however we continue to monitor the situation closely.

    Update (11th April @ 12:19): We have seen a return of DDoS traffic targeting the same customer which has saturated parts of our core network causing a brief outage. We have mitigated the attack and will continue to monitor the situation. 

    Update (30th April @ 14:29): We have not seen any further DDoS traffic and so are marking this issue as resolved.

  • Date - 10/04/2019 22:31 - 10/04/2019 22:43
  • Last Updated - 30/04/2019 14:29
RAID card replacement (Resolved)
  • Priority - High
  • Affecting Server - LDeX1-Plesk6
  • We will be replacing the RAID card on the LDeX1-Plesk6 server between 19:00 and 21:00 on 29/03/2019. This will require us to shut the server down for the duration of the work. We are expecting this to last approximately 30 minutes.

    Please accept our apologies for the inconvenience that this will cause and feel free to contact our support staff via the usual means if you have any questions or concerns.

    Update (19:16): The server is shutting down

    Update (19:35): The RAID card and one of the SSDs have been replaced. The server is back online again and the RAID array is rebuilding (see the monitoring sketch at the end of this entry). All services are back online and working normally.

    Update (21:07): The rebuild of the RAID array has completed successfully.
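
    A rebuild like this can be tracked programmatically. The Python sketch below polls /proc/mdstat and therefore assumes Linux software RAID (md) purely for illustration; a hardware RAID card, as used in this server, would instead be monitored with the vendor's own CLI.

        # Poll /proc/mdstat for rebuild progress (md software RAID only).
        import re
        import time

        def rebuild_progress():
            """Return the rebuild percentage, or None if no rebuild is running."""
            with open("/proc/mdstat") as f:
                match = re.search(r"(?:recovery|resync)\s*=\s*([\d.]+)%", f.read())
            return float(match.group(1)) if match else None

        while (pct := rebuild_progress()) is not None:
            print(f"RAID rebuild at {pct:.1f}%")
            time.sleep(60)
        print("No rebuild in progress - array is clean.")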

  • Date - 29/03/2019 19:00 - 29/03/2019 21:00
  • Last Updated - 29/03/2019 21:29
Hardware issue with LDeX1-Plesk6 (Resolved)
  • Priority - Critical
  • Affecting Server - LDeX1-Plesk6
  • We are currently investigating a hardware issue with LDeX1-Plesk6 which is preventing the server from booting. All services hosted on LDeX1-Plesk6 are currently offline.

    Update (04:24): The server is back online and all services are running normally. We are working with the hardware vendor to investigate the root cause of this problem.
    Please accept our apologies for the disruption and don't hesitate to get in touch with our helpdesk in the usual manner if you are still experiencing any issues.

  • Date - 11/03/2019 03:08 - 11/03/2019 04:24
  • Last Updated - 11/03/2019 04:37
Reboot to resolve backup issue (Resolved)
  • Priority - Medium
  • Affecting Server - LDeX1-Plesk1
  • We will be performing a reboot of the LDeX1-Plesk1 server between 21:00 and 23:59 on 05/03/2019 in order to resolve an issue which is preventing us from taking backups.

    Update (22:22): The server is rebooting

    Update (23:27): The server has been rebooted successfully and all services are functioning normally again. Please get in touch with our helpdesk in the usual manner if you are still experiencing any problems.

  • Date - 05/03/2019 21:00 - 05/03/2019 23:59
  • Last Updated - 05/03/2019 23:28
Reboot to resolve backup issue (Resolved)
  • Priority - Medium
  • Affecting Server - LDeX1-Plesk1
  • We will be performing a reboot of the LDeX1-Plesk1 server between 21:00 and 23:59 on 24/02/2019 in order to resolve an issue which is preventing us from taking backups.

    Update (22:35): The server is rebooting

    Update (22:40): The server has been rebooted successfully and all services are functioning normally again. Please get in touch with our helpdesk in the usual manner if you are still experiencing any problems.

  • Date - 24/02/2019 21:00 - 24/02/2019 23:59
  • Last Updated - 24/02/2019 22:44
Emergency reboot of LDeX1-VPS7 (Resolved)
  • Priority - High
  • Affecting System - LDeX1-VPS7
  • Following the earlier problems with the LDeX1-VPS6 node, we will be performing an emergency reboot of the LDeX1-VPS7 node between 21:00 and 23:59 today as we believe that there is a risk of it experiencing a similar issue.
    We will gracefully shut down all virtual servers running on the LDeX1-VPS7 node before performing the reboot and start them back up normally again afterwards (a rough sketch of this sequence appears at the end of this entry). We expect this to take approximately 30 minutes due to the number of virtual servers running on this node.

    Unfortunately, due to the nature of this issue we have not been able to give our normal advance notice, however this work is essential in order to ensure the ongoing stability of this server. Please accept our apologies for the inconvenience that this will cause and feel free to contact our support staff via the usual means if you have any questions or concerns.

    Update: Virtual servers on the LDeX1-VPS7 node are shutting down gracefully.

    Update: All virtual servers have been shut down successfully and the LDeX1-VPS7 node is now rebooting.

    Update: The LDeX1-VPS7 node is back online and virtual servers are booting back up. This may take a little while due to the number of servers which need to start.

    Update: Most virtual servers are back online, we're just waiting for the last handful to finish starting up.

    Update: All virtual servers are back online. Please accept our apologies for the short notice and don't hesitate to get in touch with our helpdesk in the usual manner if you are still experiencing any issues.
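
    As a rough illustration of the shut-down-and-restart sequence described above, the sketch below uses the libvirt Python bindings; this is not the tooling actually used on these nodes, and the hypervisor URI is a placeholder.

        # Gracefully stop every running guest so the node can be rebooted safely.
        import time
        import libvirt

        conn = libvirt.open("qemu:///system")  # placeholder hypervisor URI

        running = [d for d in conn.listAllDomains() if d.isActive()]
        for dom in running:
            dom.shutdown()  # sends an ACPI shutdown request into the guest

        # Wait until every guest has powered off before rebooting the node.
        while any(d.isActive() for d in running):
            time.sleep(5)
        print("All guests are down - safe to reboot the node.")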

  • Date - 17/02/2019 21:00 - 17/02/2019 23:59
  • Last Updated - 17/02/2019 21:24
LDeX1-VPS6 outage (Resolved)
  • Priority - High
  • Affecting System - LDeX1-VPS6
  • Some virtual servers hosted on the LDeX1-VPS6 node appear to be experiencing issues. We are currently investigating the cause.

    Update: We believe this relates to some routine software updates which were rolled back earlier following issues on another node. We are therefore performing an emergency reboot of the node.

    Update: It is taking some time to gracefully shut down all of the virtual servers on the node.

    Update: All virtual servers have been successfully shut down and the node is now rebooting.

    Update: The node is back online and virtual servers are booting back up. This may take a little while due to the number of servers which need to start.

    Update: Most virtual servers are back online, we're just waiting for the last handful to finish starting up.

    Update: All virtual servers are back online. Please accept our apologies for the inconvenience and don't hesitate to get in touch with our helpdesk in the usual manner if you are still experiencing any issues.

    Update: The LDeX1-VPS6 node has been running the updated version of both the hypervisor and the management tools since the reboot earlier. We have been carefully monitoring the LDeX1-VPS6 node and it has remained stable, so we believe that the issue is fully resolved, however we will continue to keep a close eye on the node in case of any further issues.

  • Date - 17/02/2019 14:13 - 17/02/2019 15:22
  • Last Updated - 17/02/2019 20:48
Upstream network maintenance (Resolved)
  • Priority - Medium
  • Affecting System - AS41000 network
  • One of our upstream network providers has informed us that they will be carrying out routine maintenance work on one of our transit connections between 00:00 and 05:00 on 15/02/2019, during which time the connection will be unavailable.

    On top of our substantial public peering, we have five upstream connections to three transit networks split across two data centres and significant spare capacity, so we do not anticipate any impact to service as a result of this maintenance work. However, as with all network maintenance of this nature, the network should be considered at risk for the duration of the maintenance window.

    If you have any questions or concerns about this maintenance work, please don't hesitate to get in touch with our helpdesk in the usual manner.

    Update: This maintenance work has been completed successfully.

  • Date - 15/02/2019 00:00 - 15/02/2019 05:00
  • Last Updated - 16/02/2019 09:31
Scheduled server reboot (Resolved)
  • Priority - Medium
  • Affecting Server - LDeX1-Plesk6
  • We will be performing scheduled maintenance including essential software updates on the LDeX1-Plesk6 server between 21:00 and 23:59 on 08/02/2019. This will require us to reboot the server so there will be a loss of service whilst this is done. The maintenance work itself should take less than 30 minutes to complete and we expect downtime to be much less than this.

    This is essential maintenance to ensure the ongoing stability and security of this server. Please accept our apologies for any inconvenience that this will cause and feel free to contact our support staff via the usual means if you have any questions or concerns.

    Update: The server has been rebooted and all services are functioning normally again after approximately 14 minutes of downtime. Please get in touch with our helpdesk in the usual manner if you are still experiencing any problems.

  • Date - 08/02/2019 21:00 - 08/02/2019 23:59
  • Last Updated - 08/02/2019 22:52
Reboot to resolve backup issue (Resolved)
  • Priority - Medium
  • Affecting Server - LDeX1-cPanel3
  • We will be performing a reboot of the LDeX1-cPanel3 server between 21:00 and 23:59 on 29/01/2019 in order to resolve an issue which is preventing us from taking backups.

    Update: The server has been rebooted and all services are functioning normally again. Please get in touch with our helpdesk in the usual manner if you are still experiencing any problems.

  • Date - 29/01/2019 21:00 - 29/01/2019 23:59
  • Last Updated - 29/01/2019 23:29
Plesk upgrade (Resolved)
  • Priority - Medium
  • Affecting Server - LDeX1-Plesk4
  • We will be performing scheduled maintenance work to upgrade the Plesk control panel software on the LDeX1-Plesk4 server starting at 21:00 on 19/01/2019.

    Due to the nature of the work, all services hosted on the LDeX1-Plesk4 server may be briefly unavailable for a few minutes at some point during the maintenance window. We have scheduled a maintenance window of 21:00 to 23:59, however the upgrade should only take 10 minutes or so.

    If you have any questions about this maintenance or about any other aspect of your hosting please don't hesitate to get in touch with our helpdesk in the usual manner or give us a call on 03300 882130.

    Update: We have begun the upgrade

    Update: The upgrade has been completed successfully.

  • Date - 19/01/2019 21:00 - 19/01/2019 23:59
  • Last Updated - 19/01/2019 21:42
Plesk upgrade (Resolved)
  • Priority - Medium
  • Affecting Server - LDeX1-Plesk3
  • We will be performing scheduled maintenance work to upgrade the Plesk control panel software on the LDeX1-Plesk3 server starting at 21:00 on 19/01/2019.

    Due to the nature of the work, all services hosted on the LDeX1-Plesk3 server may be briefly unavailable for a few minutes at some point during the maintenance window. We have scheduled a maintenance window of 21:00 to 23:59, however the upgrade should only take 10 minutes or so.

    If you have any questions about this maintenance or about any other aspect of your hosting please don't hesitate to get in touch with our helpdesk in the usual manner or give us a call on 03300 882130.

    Update: We have begun the upgrade

    Update: The upgrade has been completed successfully.

  • Date - 19/01/2019 21:00 - 19/01/2019 23:59
  • Last Updated - 19/01/2019 21:33
Plesk upgrade (Resolved)
  • Priority - Medium
  • Affecting Server - LDeX1-Plesk1
  • We will be performing scheduled maintenance work to upgrade the Plesk control panel software on the LDeX1-Plesk1 server starting at 21:00 on 19/01/2019.

    Due to the nature of the work, all services hosted on the LDeX1-Plesk1 server may be briefly unavailable for a few minutes at some point during the maintenance window. We have scheduled a maintenance window of 21:00 to 23:59, however the upgrade should only take 10 minutes or so.

    If you have any questions about this maintenance or about any other aspect of your hosting please don't hesitate to get in touch with our helpdesk in the usual manner or give us a call on 03300 882130.

    Update: We have begun the upgrade

    Update: The upgrade has been completed successfully.

  • Date - 19/01/2019 21:00 - 19/01/2019 23:59
  • Last Updated - 19/01/2019 21:22
Scheduled server reboot (Resolved)
  • Priority - Medium
  • Affecting Server - LDeX1-Plesk6
  • We will be performing scheduled maintenance including essential software updates on the LDeX1-Plesk6 server between 21:00 and 23:59 on 10/01/2019. This will require us to reboot the server so there will be a loss of service whilst this is done. The maintenance work itself should take less than 30 minutes to complete and we expect downtime to be much less than this.

    This is essential maintenance to ensure the ongoing stability and security of this server. Please accept our apologies for any inconvenience that this will cause and feel free to contact our support staff via the usual means if you have any questions or concerns.

    Update: The server has been rebooted and all services are functioning normally again after approximately 2 minutes of downtime. Please get in touch with our helpdesk in the usual manner if you are still experiencing any problems.

  • Date - 10/01/2019 21:00 - 10/01/2019 23:59
  • Last Updated - 10/01/2019 21:57
Authoritative DNS name server migration (Resolved)
  • Priority - Medium
  • Affecting System - three.freethought-dns.net, six.freethought-dns.net and ns3.mypremiumserver.com
  • We will be migrating our three.freethought-dns.net, six.freethought-dns.net and ns3.mypremiumserver.com authoritative DNS name servers to new hardware between 09:00 and 13:00 on 20/12/2018, during which time they will not be responding to DNS queries.

    Our authoritative DNS name servers are part of a geographically diverse, fully redundant cluster, so DNS queries will continue to be handled by the other name servers for the duration of the maintenance work and there will be no disruption to service. However, as with all maintenance work of this nature, authoritative DNS services should be considered at risk for the duration of the maintenance window (a quick post-migration check is sketched at the end of this entry).

    Update (12:10): We're running a little bit behind schedule due to some unforeseen delays, so we're extending the maintenance window to 16:00.

    Update (14:40): Everything has been successfully migrated over to the new hardware and all DNS services are back online.
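
    A simple way to confirm that each name server is answering after a migration of this kind is to send each one a direct query, as in the sketch below. It uses the dnspython library; the zone queried is a documentation placeholder rather than a real hosted domain.

        # Query each authoritative name server directly to confirm it responds.
        import dns.exception
        import dns.message
        import dns.query
        import dns.resolver

        NAMESERVERS = [
            "three.freethought-dns.net",
            "six.freethought-dns.net",
            "ns3.mypremiumserver.com",
        ]

        for ns in NAMESERVERS:
            ip = dns.resolver.resolve(ns, "A")[0].to_text()  # NS host -> IP
            query = dns.message.make_query("example.com", "SOA")  # placeholder zone
            try:
                dns.query.udp(query, ip, timeout=3)
                print(f"{ns} ({ip}): responding")
            except dns.exception.Timeout:
                print(f"{ns} ({ip}): NOT responding")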

  • Date - 20/12/2018 09:00 - 20/12/2018 16:00
  • Last Updated - 20/12/2018 14:43
Upstream network maintenance (Resolved)
  • Priority - Medium
  • Affecting System - AS41000 network
  • One of our upstream network providers has informed us that they will be carrying out routine maintenance work on one of our transit connections between 00:00 and 06:00 on 02/12/2018, during which time the connection will be unavailable.

    On top of our substantial public peering, we have five upstream connections to three transit networks split across two data centres and significant spare capacity, so we do not anticipate any impact to service as a result of this maintenance work. However, as with all network maintenance of this nature, the network should be considered at risk for the duration of the maintenance window.

    If you have any questions or concerns about this maintenance work, please don't hesitate to get in touch with our helpdesk in the usual manner.

    Update: This maintenance work has been completed successfully.

  • Date - 02/12/2018 00:00 - 02/12/2018 06:00
  • Last Updated - 20/12/2018 07:32
Reboot to resolve backup issue (Resolved)
  • Priority - Medium
  • Affecting Server - LDeX1-Plesk1
  • We will be performing a reboot of the LDeX1-Plesk1 server between 21:00 and 23:59 on 28/11/2018 in order to resolve an issue which is preventing us from taking backups.

    Update: The server has been rebooted and all services are functioning normally again. Please get in touch with our helpdesk in the usual manner if you are still experiencing any problems.

  • Date - 28/11/2018 21:00 - 28/11/2018 23:59
  • Last Updated - 28/11/2018 21:31
Reduced peering capacity (Resolved)
  • Priority - Medium
  • Affecting System - AS41000 network
  • LINX will be performing maintenance work on our IXManchester connection between 22:00 on 22/11/2018 and 09:00 on 23/11/2018. We will therefore be disabling all peering over IXManchester whilst this work is carried out.

    During this period, all traffic will be routed via our other peering connections on LINX LON1, LINX LON2 and LONAP as well as our three IP transit providers and so this will not be service affecting.

    If you have any questions or concerns about this maintenance work, please don't hesitate to get in touch with our helpdesk in the usual manner.

    Update: LINX completed their maintenance work successfully and full peering capacity has been restored to the network.

  • Date - 22/11/2018 22:00 - 23/11/2018 09:00
  • Last Updated - 23/11/2018 10:13
Upstream network maintenance (Resolved)
  • Priority - Medium
  • Affecting System - AS41000 network
  • One of our upstream network providers has informed us that they will be carrying out routine maintenance work on one of our transit connections between 02:00 and 05:00 on 14/11/2018, during which time the connection will be unavailable.

    On top of our substantial public peering, we have five upstream connections to three transit networks split across two data centres and significant spare capacity, so we do not anticipate any impact to service as a result of this maintenance work. However, as with all network maintenance of this nature, the network should be considered at risk for the duration of the maintenance window.

    If you have any questions or concerns about this maintenance work, please don't hesitate to get in touch with our helpdesk in the usual manner.

    Update: This maintenance work has been completed successfully.

  • Date - 14/11/2018 02:00 - 14/11/2018 05:00
  • Last Updated - 22/11/2018 10:59
Upstream network maintenance (Resolved)
  • Priority - Medium
  • Affecting System - AS41000 network
  • One of our upstream network providers has informed us that they will be carrying out routine maintenance work on one of our transit connections between 00:00 and 06:00 on 02/11/2018, during which time the connection will be unavailable.

    On top of our substantial public peering, we have five upstream connections to three transit networks split across two data centres and significant spare capacity, so we do not anticipate any impact to service as a result of this maintenance work. However, as with all network maintenance of this nature, the network should be considered at risk for the duration of the maintenance window.

    If you have any questions or concerns about this maintenance work, please don't hesitate to get in touch with our helpdesk in the usual manner.

    Update: This maintenance work has been completed successfully.

  • Date - 02/11/2018 00:00 - 02/11/2018 06:00
  • Last Updated - 02/11/2018 09:44
Mail server maintenance (Resolved)
  • Priority - High
  • Affecting Server - LDeX1-Plesk1
  • We will be carrying out routine maintenance work affecting the mail server functionality of the LDeX1-Plesk1 server between 21:00 and 23:59 on 17/10/2018. We expect incoming (POP3/IMAP) and outgoing (SMTP) email to be unavailable for approximately 90 minutes during this maintenance.

    Incoming emails using our spam filter platform will be queued until the maintenance is complete and then delivered as normal, so no emails will be lost.

    Please accept our apologies for any inconvenience that this maintenance work may cause, however this is important to ensure the smooth running of the mail server.

    If you have any questions or concerns about this maintenance work, please don't hesitate to get in touch with our helpdesk in the usual manner.

    Update (22:15): The mail server has been stopped and we have begun the maintenance work

    Update (22:50): The maintenance work has been completed successfully and the mail server has been started back up. Normal service has been resumed. Please get in touch with our helpdesk in the usual manner if you are still experiencing any problems.

  • Date - 17/10/2018 21:00 - 17/10/2018 23:59
  • Last Updated - 17/10/2018 22:52
Reduced peering capacity (Resolved)
  • Priority - Medium
  • Affecting System - AS41000 network
  • LINX will be performing maintenance work on our LON2 connection between 23:30 on 10/09/2018 and 09:00 on 11/09/2018. We will therefore be disabling all peering over LON2 whilst this work is carried out.

    During this period, all traffic will be routed via our other peering connections on LINX LON1, LINX Manchester and LONAP as well as our three IP transit providers and so this will not be service affecting.

    If you have any questions or concerns about this maintenance work, please don't hesitate to get in touch with our helpdesk in the usual manner.

    Update: LINX completed their maintenance work successfully and full peering capacity has been restored to the network.

  • Date - 10/09/2018 23:30 - 11/09/2018 09:00
  • Last Updated - 20/09/2018 09:35
Reboot to resolve backup issue (Resolved)
  • Priority - High
  • Affecting Server - LDeX1-Plesk1
  • We will be performing a reboot of the LDeX1-Plesk1 server between 21:00 and 23:59 on 05/08/2018 in order to resolve an issue which is preventing us from taking backups.

    Update: The server has been rebooted and all services are functioning normally again. Please get in touch with our helpdesk in the usual manner if you are still experiencing any problems.

  • Date - 05/08/2018 21:00 - 05/08/2018 23:59
  • Last Updated - 31/08/2018 16:08
Emergency reboot (Resolved)
  • Priority - High
  • Affecting Server - LDeX1-cPanel1
  • We have lost access to the LDeX1-cPanel1 server and the console indicates that the server has locked up and is unresponsive. We are therefore performing an emergency reboot in order to restore service as quickly as possible.

    Update: The LDeX1-cPanel1 server is back online and working normally again. We will continue to closely monitor the server whilst also investigating what caused the initial lock up.

  • Date - 28/08/2018 18:06 - 28/08/2018 18:12
  • Last Updated - 28/08/2018 18:14
SQL Server upgrade (Resolved)
  • Priority - High
  • Affecting Server - TMA01/Japetus
  • We will be upgrading the version of SQL Server on the TMA01/Japetus server on 02/08/2018 between 21:00 and 23:59.

    Due to the nature of the work, any web sites or other services dependent upon SQL Server databases will be unavailable during the upgrade. Additionally, this upgrade will probably require at least one reboot, so all services hosted on the TMA01/Japetus server will be unavailable for portions of the maintenance window.

    If you have any questions about this maintenance or about any other aspect of your hosting please don't hesitate to get in touch with our helpdesk in the usual manner or give us a call on 03300 882130.

    Update: Unfortunately we have been unable to complete this upgrade after running into some issues with SQL Server. We will investigate these further and then re-schedule the maintenance work.

    Update: We believe that we have now resolved the issues which were preventing us from carrying out this upgrade and so will be re-scheduling this work for 05/08/2018 between 21:00 and 23:59.

    Update: The upgrade to SQL Server 2014 has been completed successfully

  • Date - 05/08/2018 21:00 - 05/08/2018 23:59
  • Last Updated - 05/08/2018 22:44
Intermittent network disruption (Resolved)
  • Priority - Critical
  • Affecting System - AS41000 Network
  • We are currently experiencing intermittent network disruption due to an ongoing attack against a customer on our network. We are working to mitigate the attack and will post updates when we have them.

    Update: The attack was successfully mitigated and the network has remained stable since these mitigations were put in place.

  • Date - 26/07/2018 21:47 - 26/07/2018 22:03
  • Last Updated - 26/07/2018 23:43
Border router software update (THN-RT1) (Resolved)
  • Priority - Medium
  • Affecting System - AS41000 network
  • We will be performing a software update on our THN-RT1 border router in Telehouse North on 16/07/2018 starting at 21:00.

    This update will require us to reboot the THN-RT1 border router and so will be service affecting for any customers with services terminating directly on the router such as IP transit. Other services will not be affected as we will gracefully route traffic away from the router prior to performing the update.

    We have sufficient upstream IP transit and peering capacity elsewhere in the network to handle the load from the THN-RT1 border router without causing congestion, however we will be running without redundancy and so the network should be considered at-risk for the duration of the work.

    If you have any questions or concerns about this maintenance work, please don't hesitate to get in touch with our helpdesk in the usual manner.

    Update: The upgrade has been completed successfully and full redundancy has been restored to the network.

  • Date - 16/07/2018 21:00 - 17/07/2018 01:00
  • Last Updated - 16/07/2018 22:35
Border router software update (EQMA1-RT1) (Resolved)
  • Priority - Medium
  • Affecting System - AS41000 network
  • We will be performing a software update on our EQMA1-RT1 border router in Equinix MA1 (formerly Telecity Williams House/Kilburn House) on 09/07/2018 starting at 21:00.

    This update will require us to reboot the EQMA1-RT1 border router and so will be service affecting for any customers with services terminating directly on the router such as IP transit. Other services will not be affected as we will gracefully route traffic away from the router prior to performing the update.

    We have sufficient upstream IP transit and peering capacity elsewhere in the network to handle the load from the EQMA1-RT1 border router without causing congestion, however we will be running without redundancy and so the network should be considered at-risk for the duration of the work.

    If you have any questions or concerns about this maintenance work, please don't hesitate to get in touch with our helpdesk in the usual manner.

    Update: Unfortunately we have had to re-schedule this maintenance work to run from 21:00 on 10/07/2018 to 01:00 on 11/07/2018.

    Update: The upgrade has been completed successfully and full redundancy has been restored to the network.

  • Date - 10/07/2018 21:00 - 11/07/2018 01:00
  • Last Updated - 11/07/2018 00:33
Cloud server hypervisor maintenance (Resolved)
  • Priority - Medium
  • Affecting System - Cloud server platform
  • We will be performing a software update to the compute hypervisors for the Freethought cloud environment, which hosts our shared, reseller, and website builder products as well as other supporting services such as DNS and spam filtering, plus any cloud server customers.

    The maintenance should not be seriously service affecting, however there will be brief moments of inaccessibility whilst cloud servers migrate from one hardware node to another during the maintenance period. During the maintenance window, services hosted within the cloud environment should be considered at risk.

    If you have any questions or concerns about this maintenance work, please don't hesitate to get in touch with our helpdesk in the usual manner.

    Update: The upgrade has been completed successfully

  • Date - 03/07/2018 21:00 - 04/07/2018 03:00
  • Last Updated - 04/07/2018 01:08
Cloud server storage maintenance (Resolved)
  • Priority - Medium
  • Affecting System - Cloud server platform
  • We will be performing a software update to the storage platform for the Freethought cloud environment, which hosts our shared, reseller, and website builder products as well as other supporting services such as DNS and spam filtering, plus any cloud server customers.

    The maintenance should not be seriously service affecting, however there may be brief moments of inaccessibility whilst we failover between storage controllers during the maintenance period. During the maintenance window services hosted within the cloud environment should be considered at risk.

    If you have any questions or concerns about this maintenance work, please don't hesitate to get in touch with our helpdesk in the usual manner.

    Update: The upgrade has been completed successfully.

  • Date - 02/07/2018 21:00 - 02/07/2018 23:59
  • Last Updated - 02/07/2018 22:24
Backhaul network maintenance (Resolved)
  • Priority - Medium
  • Affecting System - AS41000 network
  • We have been notified by our backhaul connectivity provider that scheduled maintenance work will be carried out on their equipment in LDeX2 between 01:00 and 03:00 on 01/07/2018, during which time they will perform an emergency reboot of a core device due to a memory leak.

    Our network is designed to withstand the loss of one circuit and continue to function as normal, so this maintenance work should not be service affecting.

    Whilst we do not anticipate a complete loss of connectivity to any part of our network during this maintenance window, there may, however, be a brief period of increased latency or packet loss at the start and end of the maintenance work whilst the network reconverges. As with all network maintenance of this nature, the network should be considered at risk for the duration of the maintenance window.

    If you have any questions or concerns about this maintenance work, please don't hesitate to get in touch with our helpdesk in the usual manner.

    Update: This maintenance work has been completed successfully.

  • Date - 01/07/2018 01:00 - 01/07/2018 03:00
  • Last Updated - 01/07/2018 21:20
Reboot of TMA01/Japetus (Resolved)
  • Priority - Medium
  • Affecting Server - TMA01/Japetus
  • We will be performing a reboot of the TMA01/Japetus server on 01/06/2018 between 21:00 and 23:59 in order to address an issue with TLS 1.2 and FTPS. This should take approximately 10 minutes to complete.

    This is essential maintenance to ensure the ongoing stability and security of this server. Please accept our apologies for any inconvenience that this will cause and feel free to contact our support staff via the usual means if you have any questions or concerns.

  • Date - 01/06/2018 21:00 - 01/06/2018 23:59
  • Last Updated - 16/06/2018 10:45
Backhaul network at risk (Resolved)
  • Priority - Medium
  • Affecting System - AS41000 network
  • Our backhaul network provider has informed us that Zayo have suffered a fibre break between London and Manchester which is affecting one of their 10Gbps wavelengths, and thus the backhaul network is currently at-risk.

    Our network is designed to withstand the loss of one circuit and continue to function as normal, so this is not service affecting for any customers. The network should however be considered at-risk until this issue is resolved as a further failure on another leg of the backhaul network would lead to the network becoming partitioned.

    If you have any questions or concerns about this, please don't hesitate to get in touch with our helpdesk in the usual manner.

    Update: Zayo have splicing teams ready to repair the damaged section of fibre, but are currently struggling to gain the necessary access in order to do so.

    Update: Zayo have completed the repair work and full redundancy has been restored

  • Date - 31/05/2018 20:50 - 01/06/2018 17:51
  • Last Updated - 01/06/2018 19:59
Reduced peering capacity (Resolved)
  • Priority - Medium
  • Affecting System - AS41000 network
  • LINX will be migrating our LON2 connection from the old Extreme network to the new EdgeCore/IP Infusion network between 23:30 on 31/05/2018 and 09:00 on 01/06/2018. We will therefore be disabling all peering over LON2 whilst this work is carried out.

    During this period, all traffic will be routed via our other peering connections on LINX LON1, LINX Manchester and LONAP as well as our three IP transit providers and so this will not be service affecting.

    If you have any questions or concerns about this maintenance work, please don't hesitate to get in touch with our helpdesk in the usual manner.

    Update: This maintenance work has been completed successfully.

  • Date - 31/05/2018 23:30 - 01/06/2018 09:00
  • Last Updated - 01/06/2018 07:20
Upstream network maintenance (Resolved)
  • Priority - Medium
  • Affecting System - AS41000 network
  • One of our upstream network providers has informed us that they will be carrying out routine maintenance work on one of our transit connections between 00:00 and 03:00 on 30/05/2018, during which time the connection will be unavailable.

    On top of our substantial public peering, we have five upstream connections to three transit networks split across two data centres and significant spare capacity, so we do not anticipate any impact to service as a result of this maintenance work. However, as with all network maintenance of this nature, the network should be considered at risk for the duration of the maintenance window.

    If you have any questions or concerns about this maintenance work, please don't hesitate to get in touch with our helpdesk in the usual manner.

    Update: This maintenance work has been completed successfully.

  • Date - 30/05/2018 00:00 - 30/05/2018 03:00
  • Last Updated - 30/05/2018 09:51
Backhaul network maintenance (Resolved)
  • Priority - Medium
  • Affecting System - AS41000 network
  • We have been notified by our backhaul connectivity provider that scheduled maintenance work will be carried out on their equipment in LDeX1 and Telehouse North between 22:00 on 25/05/2018 and 04:00 on 26/05/2018 as they perform software updates on their core routers as well as their WDM platform.

    Our network is designed to withstand the loss of one circuit and continue to function as normal, so this maintenance work should not be service affecting.

    Whilst we do not anticipate a complete loss of connectivity to any part of our network during this maintenance window, there may, however, be a brief period of increased latency or packet loss at the start and end of the maintenance work whilst the network reconverges. As with all network maintenance of this nature, the network should be considered at risk for the duration of the maintenance window.

    If you have any questions or concerns about this maintenance work, please don't hesitate to get in touch with our helpdesk in the usual manner.

    Update: This maintenance work has been completed successfully.

  • Date - 25/05/2018 22:00 - 26/05/2018 04:00
  • Last Updated - 29/05/2018 09:58
Upstream network maintenance (Resolved)
  • Priority - Medium
  • Affecting System - AS41000 network
  • One of our upstream network providers has informed us that they will be carrying out routine maintenance work on one of our transit connections between 01:00 and 04:00 on 29/05/2018, during which time the connection will be unavailable.

    On top of our substantial public peering, we have five upstream connections to three transit networks split across two data centres and significant spare capacity, so we do not anticipate any impact to service as a result of this maintenance work. However, as with all network maintenance of this nature, the network should be considered at risk for the duration of the maintenance window.

    If you have any questions or concerns about this maintenance work, please don't hesitate to get in touch with our helpdesk in the usual manner.

    Update: This maintenance work has been completed successfully.

  • Date - 29/05/2018 01:00 - 29/05/2018 04:00
  • Last Updated - 29/05/2018 09:58
Cloud server maintenance (Resolved)
  • Priority - High
  • Affecting System - Cloud server platform
  • We will be performing maintenance work on the cloud server platform between 21:00 on 27/04/2018 and 09:00 on 28/04/2018. During this time, we may need to reboot some virtual machines, so there will be a brief disruption to service for the impacted servers.

    If you have any questions or concerns about this maintenance work, please don't hesitate to get in touch with our helpdesk in the usual manner.

    Update: The vast majority of the maintenance work has been completed successfully. We will need to perform follow-up maintenance affecting only the LDeX1-cPanel4 and TMA01/Japetus servers this evening (21:00 on 28/04/2018 to 09:00 on 29/04/2018) as we were unable to complete this work during the first window.

    Update: The remaining maintenance work has been rescheduled for 21:00 on 03/05/2018 to 09:00 on 04/05/2018.

    Update: Part of the maintenance work was completed successfully (migrating TMA01/Japetus), however we were not able to complete all of the work due to a system-wide outage on the cloud server platform, possibly triggered by the maintenance work. We will investigate what happened and reschedule any remaining maintenance work once we are confident that it can be completed without any further major disruption.

  • Date - 03/05/2018 21:00 - 04/05/2018 09:00
  • Last Updated - 04/05/2018 07:39
Cloud server outage (Resolved)
  • Priority - Critical
  • Affecting System - Cloud server platform
  • Between 03:26 and 06:00 on 04/05/2018 we experienced a system-wide failure on the cloud server platform in London. All virtual machines running on the platform locked up and would not restart. It seems that storage performance on the platform was poor to the point of being unusable. After troubleshooting for some time, we eventually had to perform a complete restart of the entire cloud server platform in order to resolve this issue.

    Once all virtual servers were shut down, the compute nodes were restarted and we then performed a restart on the primary controller on the SAN, triggering a failover to the secondary controller. This seems to have cleared the issue responsible for the original outage and allowed us to start virtual machines back up normally. Services began coming back online around 05:30 and were fully restored by 06:00.

    If you have any questions about this outage or if you are still experiencing any issues, then please get in touch with our helpdesk in the usual manner.

    This is being posted retrospectively as the customer portal was also impacted by this outage.

  • Date - 04/05/2018 03:26 - 04/05/2018 06:00
  • Last Updated - 04/05/2018 06:09
Backhaul network maintenance (Resolved)
  • Priority - Medium
  • Affecting System - AS41000 network
  • We have been notified by our backhaul connectivity provider that scheduled maintenance work will be carried out on their equipment in LDeX2 between 22:00 on 20/04/2018 and 04:00 on 21/04/2018 as they perform software updates on their core routers.

    Our network is designed to withstand the loss of one circuit and continue to function as normal, so this maintenance work should not be service affecting.

    Whilst we do not anticipate a complete loss of connectivity to any part of our network during this maintenance window, there may, however, be a brief period of increased latency or packet loss at the start and end of the maintenance work whilst the network reconverges. As with all network maintenance of this nature, the network should be considered at risk for the duration of the maintenance window.

    If you have any questions or concerns about this maintenance work, please don't hesitate to get in touch with our helpdesk in the usual manner.

    Update: This maintenance work has been completed successfully.

  • Date - 20/04/2018 22:00 - 21/04/2018 04:00
  • Last Updated - 22/04/2018 20:06
Reduced transit and peering redundancy (Resolved)
  • Priority - High
  • Affecting System - AS41000 network
  • We will be disabling all transit and peering connectivity at Equinix MA1 between 21:30 on 06/04/2018 and 09:30 on 07/04/2018 due to maintenance work taking place on our backhaul connections in the MA1 data centre.

    During this period, all traffic will be routed via the Telehouse North (THN) data centre in London and so this will not be service affecting, however the network will be operating with reduced redundancy and so should be considered at risk for the duration of this maintenance window.

    If you have any questions or concerns about this maintenance work, please don't hesitate to get in touch with our helpdesk in the usual manner.

    Update: This maintenance work has been completed successfully.

  • Date - 06/04/2018 21:30 - 07/04/2018 09:30
  • Last Updated - 13/04/2018 20:00
DDoS attack against LDeX1-cPanel1 (Resolved)
  • Priority - High
  • Affecting Server - LDeX1-cPanel1
  • A customer hosted on the LDeX1-cPanel1 server is the target of a DDoS attack which is causing occasional disruption of 1-2 minutes to web sites hosted on this server. We are blocking the attack, however as it keeps shifting we have to put additional blocks in place each time (an illustrative sketch of this kind of blocking appears at the end of this entry).

    Update: We have put some additional measures in place to better handle this attack.

    Update: We are making some changes to the web server software in order to better handle these attacks. These changes themselves will be somewhat disruptive, however they should hopefully minimise any further disruption from the attack.

    Update: We have completed the software changes and combined with the measures that we put in place earlier, it looks like this has had the desired effect.

    Update: We haven't seen any further disruption for over an hour now, so we believe that this is under control.
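
    For illustration of the per-source blocking described above, the sketch below drops traffic from a single offending range with iptables. The address is a documentation placeholder, and on this network the real filtering is done at the network edge rather than necessarily on the server itself.

        # Drop traffic from one attacking source range (placeholder address).
        # Each time the attack shifts, another rule like this has to be added.
        import subprocess

        ATTACK_SOURCE = "198.51.100.0/24"  # hypothetical attacking range

        subprocess.run(
            ["iptables", "-I", "INPUT", "-s", ATTACK_SOURCE, "-j", "DROP"],
            check=True,
        )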

  • Date - 05/04/2018 12:24 - 05/04/2018 13:37
  • Last Updated - 05/04/2018 14:49
Upstream network maintenance (Resolved)
  • Priority - Medium
  • Affecting System - AS41000 network
  • One of our upstream network providers has informed us that they will be carrying out routine maintenance work on one of our transit connections between 00:00 and 05:00 on 13/03/2018, during which time the connection will be unavailable.

    On top of our substantial public peering, we have five upstream connections to three transit networks split across two data centres and significant spare capacity, so we do not anticipate any impact to service as a result of this maintenance work. However, as with all network maintenance of this nature, the network should be considered at risk for the duration of the maintenance window.

    If you have any questions or concerns about this maintenance work, please don't hesitate to get in touch with our helpdesk in the usual manner.

    Update: This maintenance work has been completed successfully.

  • Date - 13/03/2018 00:00 - 13/03/2018 05:00
  • Last Updated - 26/03/2018 16:21
Upstream network maintenance (Resolved)
  • Priority - Medium
  • Affecting System - AS41000 network
  • One of our upstream network providers has informed us that they will be carrying out routine maintenance work on one of our transit connections between 00:00 and 05:00 on 12/03/2018, during which time the connection will be unavailable.

    On top of our substantial public peering, we have five upstream connections to three transit networks split across two data centres and significant spare capacity, so we do not anticipate any impact to service as a result of this maintenance work. However, as with all network maintenance of this nature, the network should be considered at risk for the duration of the maintenance window.

    If you have any questions or concerns about this maintenance work, please don't hesitate to get in touch with our helpdesk in the usual manner.

    Update: This maintenance work has been completed successfully.

  • Date - 12/03/2018 00:00 - 12/03/2018 05:00
  • Last Updated - 26/03/2018 16:21
Potential DDoS attack (Resolved)
  • Priority - Critical
  • Affecting System - AS41000 network
  • We have observed a brief but large influx of traffic onto the network between 01:59 and 02:02 causing packet loss. This may have been a DDoS attack, however because it was so short lived we were unable to investigate thoroughly before it ceased. We continue to monitor the network in case of any further such activity.

  • Date - 01/03/2018 01:59 - 01/03/2018 02:02
  • Last Updated - 01/03/2018 02:34
Backup maintenance (Resolved)
  • Priority - Medium
  • Affecting System - LDeX2-Back1
  • We will be carrying out maintenance work on the LDeX2-Back1 R1Soft CDP backup server on 10/02/2018 starting at 21:00, during which time the server will be unavailable. No new backups will be taken and it will not be possible to perform any restores during this maintenance.

  • Date - 10/02/2018 21:00 - 11/02/2018 09:00
  • Last Updated - 15/02/2018 21:21
Reboot to resolve backup issue (Resolved)
  • Priority - High
  • Affecting Server - LDeX1-Plesk1
  • We will be performing a reboot of the LDeX1-Plesk1 server between 22:00 and 23:59 on 02/01/2018 in order to resolve an issue which is preventing us from taking backups.

    Update (02/01/2018 22:20): The server is now rebooting

    Update (02/01/2018 22:29): The LDeX1-Plesk1 server is back online again. Please get in touch with our helpdesk in the usual manner if you are still experiencing any problems.

  • Date - 02/01/2018 22:00 - 02/01/2018 23:59
  • Last Updated - 02/01/2018 22:31
Additional disk space for LDeX1-cPanel4 (Resolved)
  • Priority - High
  • Affecting Server - LDeX1-cPanel4
  • We are allocating additional disk space to the /home partition on the LDeX1-cPanel4 server. This will require us to shut the server down briefly (an illustrative sketch of this kind of resize appears at the end of this entry).

    Update: Unfortunately whilst resizing the filesystem we have encountered a problem which has required us to restore the /home partition from backups taken earlier today.

    Update: We are currently estimating that the restore will finish some time between 05:30 and 06:30.

    Update: Unfortunately the rate of restore has slowed somewhat and so the current estimate is that it will finish some time between 07:30 and 08:00.

    Update: The restore has finished and the LDeX1-cPanel4 server is back online. We are currently checking the server to make sure that everything is functioning correctly.

    Update: Everything appears to be working correctly on the LDeX1-cPanel4 server and we have delivered all queued emails from our anti-spam platform. We will be sending an email with full details to all affected customers shortly, but in the meantime please get in touch with our helpdesk in the usual manner if you are experiencing any problems.
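
    As a sketch of the resize operation described above, assuming an LVM-backed ext4 filesystem (the server's actual storage layout is not stated here, and the device path and size are placeholders), growing the volume and then the filesystem are the two usual steps:

        # Grow an LVM logical volume by 50 GiB, then grow the ext4 filesystem
        # to fill it. lvextend and resize2fs are the standard tools.
        import subprocess

        DEVICE = "/dev/vg0/home"  # hypothetical logical volume backing /home

        subprocess.run(["lvextend", "-L", "+50G", DEVICE], check=True)
        subprocess.run(["resize2fs", DEVICE], check=True)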

  • Date - 18/12/2017 22:50 - 19/12/2017 08:09
  • Last Updated - 19/12/2017 08:33
Backhaul network maintenance (Resolved)
  • Priority - Medium
  • Affecting System - AS41000 network
  • We have been notified by our backhaul connectivity provider that scheduled maintenance work will be carried out on one of the circuits between LDeX1 and LDeX2 between 23:00 on 24/11/2017 and 07:00 on 25/11/2017 in order for their supplier to cut and splice long haul fibres in Leeds as part of network expansion work.

    Our network is designed to withstand the loss of one circuit and continue to function as normal, so this maintenance work should not be service affecting.
    Whilst we do not anticipate a complete loss of connectivity to any part of our network during this maintenance window, there may be a brief period of increased latency or packet loss at the start and end of the maintenance work whilst the network reconverges. As with all network maintenance of this nature, the network should be considered at risk of disruption for the duration of the maintenance window.

    If you have any questions or concerns about this maintenance work, please don't hesitate to get in touch with our helpdesk in the usual manner.

    Update: This has been rescheduled to take place between 23:00 on 08/12/2017 and 07:00 on 09/12/2017.

    Update: This maintenance work has been completed successfully.

  • Date - 08/12/2017 23:00 - 09/12/2017 07:00
  • Last Updated - 12/12/2017 11:25
Cloud server storage maintenance (Resolved)
  • Priority - High
  • Affecting System - Cloud server platform
  • Following the issues with the SAN handling the storage for our cloud server platform on 02/12/2017, we have been advised by Dell to carry out a firmware update which resolves a memory leak bug. We will be doing this between 22:00 on 06/12/2017 and 02:00 on 07/12/2017.

    The SAN has two controllers, so one should always be online and handling storage activity whilst the other is being upgraded. We do not expect this work to have any noticeable impact beyond a few seconds' pause in disk activity whilst failing over between controllers; however, the entire maintenance period should be considered "at risk".

    If you have any questions or concerns about this maintenance work, please don't hesitate to get in touch with our helpdesk in the usual manner.

    Update: The firmware update has been completed successfully.

  • Date - 06/12/2017 22:00 - 07/12/2017 02:00
  • Last Updated - 06/12/2017 23:29
Cloud server storage issues (Resolved)
  • Priority - Critical
  • Affecting System - Cloud server platform
  • One of the nodes on our cloud server platform has lost connection to the storage network and so all virtual machines on this node have lost access to their virtual disks.

    Update: The node has been re-connected to the storage and all affected virtual machines have been rebooted. Normal service has now resumed.

    Update: Two more nodes have suffered the same problem. As far as we can tell, this appears to be an uptime-related bug in the iSCSI multipathing code in XenServer.

    Update: The LDeX1-Plesk1 and LDeX1-cPanel1 hosting servers are among the virtual machines which have been affected by this issue. We are currently performing an emergency reboot of both servers.

    Update: The LDeX1-Plesk1 server seems to have got stuck mounting its filesystems, so we are rebooting it again.

    Update: We are performing a filesystem check on the LDeX1-cPanel1 server.

    Update: The LDeX1-Plesk1 server is back online, although the load is currently quite high.

    Update: The load on the LDeX1-Plesk1 server is back to normal.

    Update: The LDeX1-cPanel1 server is back online again.

    Update: Following the previous update, the issue began affecting the entire cloud platform with a complete loss of storage to all cloud nodes. This impacted all our shared, reseller, and Windows hosting as well as any other customers or internal systems running inside our cloud environment. The issue appears to relate to one of the SAN controllers, but the SAN has a redundant controller that was eventually able to take over. Subsequently some servers needed an additional reboot as their filesystems had gone read-only, causing a variety of errors on hosted websites and email. The issue does now appear to be stable, but we are monitoring the situation carefully and will provide more information when it becomes available.

    Update: We have been monitoring the cloud platform and have not experienced any more problems since services were restored at approximately 17:30, so we now consider this issue resolved. We will continue to work with our hardware vendors to determine the root cause of the problem and what steps can be taken to prevent a recurrence. If anyone experiences any continued problems please don't hesitate to contact support in the usual manner.
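
    As an aside on the read-only filesystems mentioned above: a minimal Python sketch of how such mounts can be flagged on a Linux host, assuming the standard /proc/mounts layout (nothing here is specific to our platform):

        # Minimal sketch: flag mounts that are read-only by parsing
        # /proc/mounts, whose whitespace-separated fields are
        # device, mountpoint, fstype, options, dump, pass.
        PSEUDO_FS = {"proc", "sysfs", "tmpfs", "devtmpfs", "cgroup"}

        with open("/proc/mounts") as mounts:
            for line in mounts:
                device, mountpoint, fstype, options = line.split()[:4]
                if fstype in PSEUDO_FS:
                    continue  # pseudo filesystems are often legitimately ro
                if "ro" in options.split(","):
                    print(f"read-only: {mountpoint} ({device}, {fstype})")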

  • Date - 02/12/2017 15:02 - 02/12/2017 17:30
  • Last Updated - 02/12/2017 22:00
Backhaul network maintenance (Resolved)
  • Priority - Medium
  • Affecting System - AS41000 network
  • We have been notified by our backhaul connectivity provider that scheduled maintenance work will be carried out on one of the circuits between LDeX1 and LDeX2 between 10:00 and 22:00 on 26/11/2017 in order for their supplier to move fibres in the Trafford Park area due to construction work taking place.

    Our network is designed to withstand the loss of one circuit and continue to function as normal, so this maintenance work should not be service affecting.
    Whilst we do not anticipate a complete loss of connectivity to any part of our network during this maintenance window, there may be a brief period of increased latency or packet loss at the start and end of the maintenance work whilst the network reconverges. As with all network maintenance of this nature, the network should be considered at risk of disruption for the duration of the maintenance window.

    If you have any questions or concerns about this maintenance work, please don't hesitate to get in touch with our helpdesk in the usual manner.

    Update: This maintenance work has been completed successfully.

  • Date - 26/11/2017 10:00 - 26/11/2017 22:00
  • Last Updated - 28/11/2017 12:38
Backhaul network maintenance (Resolved)
  • Priority - Medium
  • Affecting System - AS41000 network
  • We have been notified by our backhaul connectivity provider that scheduled maintenance work will be carried out on one of the circuits between LDeX1 and LDeX2 between 06:00 and 14:00 on 19/11/2017 in order for their supplier to cut and splice long haul fibres in Leeds as part of network expansion work.

    Our network is designed to withstand the loss of one circuit and continue to function as normal, so this maintenance work should not be service affecting.
    Whilst we do not anticipate a complete loss of connectivity to any part of our network during this maintenance window, there may be a brief period of increased latency or packet loss at the start and end of the maintenance work whilst the network reconverges. As with all network maintenance of this nature, the network should be considered at risk of disruption for the duration of the maintenance window.

    If you have any questions or concerns about this maintenance work, please don't hesitate to get in touch with our helpdesk in the usual manner.

    Update: This maintenance work has been completed successfully.

  • Date - 19/11/2017 06:00 - 19/11/2017 14:00
  • Last Updated - 28/11/2017 12:38
Backhaul network maintenance (Resolved)
  • Priority - Medium
  • Affecting System - AS41000 network
  • We have been notified by our backhaul connectivity provider that scheduled maintenance work will be carried out on one of the circuits between Telehouse North and Equinix MA1 (formerly Telecity Williams House/Kilburn House) between 00:00 and 05:00 on 28/11/2017 in order for their supplier to replace a failing card.
    A backup window between 00:00 and 05:00 on 29/11/2017 has also been scheduled in case any further work is required.

    Our network is designed to withstand the loss of one circuit and continue to function as normal, so this maintenance work should not be service affecting.
    Whilst we do not anticipate a complete loss of connectivity to any part of our network during this maintenance window, there may be a brief period of increased latency or packet loss at the start and end of the maintenance work whilst the network reconverges. As with all network maintenance of this nature, the network should be considered at risk of disruption for the duration of the maintenance window.

    If you have any questions or concerns about this maintenance work, please don't hesitate to get in touch with our helpdesk in the usual manner.

    Update: This maintenance work has been completed successfully without the need to make use of the backup window.

  • Date - 28/11/2017 00:00 - 28/11/2017 05:00
  • Last Updated - 28/11/2017 12:30
Backhaul network maintenance (Resolved)
  • Priority - Medium
  • Affecting System - AS41000 network
  • We have been notified by our backhaul connectivity provider that scheduled maintenance work will be carried out on their equipment in LDeX1 between 22:00 on 17/11/2017 and 04:00 on 18/11/2017 as they remove redundant hardware from their rack and perform software updates on their core routers.

    Our network is designed to withstand the loss of one circuit and continue to function as normal, so this maintenance work should not be service affecting.
    Whilst we do not anticipate a complete loss of connectivity to any part of our network during this maintenance window, there may be a brief period of increased latency or packet loss at the start and end of the maintenance work whilst the network reconverges. As with all network maintenance of this nature, the network should be considered at risk of disruption for the duration of the maintenance window.

    If you have any questions or concerns about this maintenance work, please don't hesitate to get in touch with our helpdesk in the usual manner.

    Update: This maintenance work has been completed successfully.

  • Date - 17/11/2017 22:00 - 18/11/2017 04:00
  • Last Updated - 19/11/2017 15:28
Manchester data centre loss of connectivity (Resolved)
  • Priority - Critical
  • Affecting System - AS41000 network
  • We are aware of a connectivity issue affecting our Manchester data centre (LDeX2). We are investigating and working to restore connectivity as quickly as possible. More updates will be posted when available.

    Update @ 12:48 - Connectivity has been restored; we apologise for any inconvenience this caused. We will continue to monitor the situation to ensure there is no repeat of the issue.

  • Date - 09/11/2017 12:43 - 09/11/2017 12:48
  • Last Updated - 09/11/2017 22:30
Backhaul network maintenance (Resolved)
  • Priority - Medium
  • Affecting System - AS41000 network
  • We have been notified by our backhaul connectivity provider that scheduled maintenance work will be carried out on one of the circuits between LDeX1 and LDeX2 between 03:00 and 08:00 on 05/11/2017 in order for their supplier to inspect and clean some fibres in Manchester.

    Our network is designed to withstand the loss of one circuit and continue to function as normal, so this maintenance work should not be service affecting.
    Whilst we do not anticipate a complete loss of connectivity to any part of our network during this maintenance window, there may be a brief period of increased latency or packet loss at the start and end of the maintenance work whilst the network reconverges. As with all network maintenance of this nature, the network should be considered at risk of disruption for the duration of the maintenance window.

    If you have any questions or concerns about this maintenance work, please don't hesitate to get in touch with our helpdesk in the usual manner.

    Update: This maintenance work has been completed without incident.

  • Date - 05/11/2017 03:00 - 05/11/2017 08:00
  • Last Updated - 06/11/2017 12:43
IP transit outage (Resolved)
  • Priority - Medium
  • Affecting System - AS41000 Network
  • We have suffered the loss of one of our IP transit providers in Telehouse North (London). This may have resulted in a brief disruption of service for some customers whilst our network re-routed the affected traffic onto alternative providers.

    We have four other connections to two different IP transit providers as well as extensive peering at four internet exchanges, so we have ample capacity to handle the loss of a single supplier and still maintain network redundancy.

    Update (09:45): The IP transit provider in question has acknowledged the outage and is investigating. This is a wider issue affecting several of their clients.

    Update (10:41): Our connection to the IP transit provider has come back online and appears to be functioning normally again. We are waiting for them to provide us with a full explanation as to exactly what happened in order to cause this issue.

    Update (10:49): The IP transit provider has confirmed that service has been restored. The device terminating our connections lost power, however they do not yet know what the root cause of this was and so consider the service to be at risk until this has been established.

    Update (11:42): Our connection has gone down again. Once again this may have resulted in a brief disruption for some customers whilst the affected traffic was re-routed onto alternative paths.

    Update (11:47): The connection is back up.

    Update (11:49): The connection has gone down for a third time.

    Update (11:52): The connection is back up again.

    Update (11:54): We have disabled the provider in question until the connection stabilises and we receive an explanation as to exactly what has caused the problems this morning, as well as what has been done to address this permanently.

    Update (12:55): The IP transit provider has confirmed that they have identified the root cause of the power issue and taken appropriate steps to correct it. They consider the issue resolved and the service stable, however we will leave the connection disabled and continue to observe it for a while before re-introducing them to our network.

    Update (23:04): The connection has remained stable, so we have brought it back into service. Full redundancy and capacity have been restored to the network.

  • Date - 31/10/2017 09:29 - 31/10/2017 23:04
  • Last Updated - 31/10/2017 23:05
Suspected switch failure (Resolved)
  • Priority - Critical
  • Affecting System - LDeX1-SW02
  • We are currently investigating the possible failure of the LDeX1-SW02 switch. All customers directly connected to this switch will currently be offline.

    Update (07:22): All affected customers have been moved to another switch and should be back online. If you are still experiencing any issues then please get in touch with our helpdesk in the usual manner.

  • Date - 20/10/2017 06:01 - 20/10/2017 07:22
  • Last Updated - 20/10/2017 07:34
Intermittent issues on TMA01/Japetus (Resolved)
  • Priority - High
  • Affecting Server - TMA01/Japetus
  • We are seeing intermittent issues on the TMA01/Japetus server which may be causing web sites to load slowly or not at all.

    Update: Several key Windows management functions are unresponsive, so we are performing an emergency reboot of the server.

    Update: The reboot is taking longer than normal due to pending updates being installed.

    Update: We have temporarily suspended a site on the server which we believe may be being attacked. The server seems to be stable at the moment; however, we are continuing to monitor the situation.

    Update: The server is experiencing renewed intermittent issues which we are attempting to diagnose. We will update this status notice once the issue has been identified.

    Update: The server appears to be stable again; however, we are continuing to monitor the situation.

    Update (06/09/2017): We have seen a further period of high CPU usage causing issues between 12:33 and 13:07. Unfortunately we have not been able to determine the cause of this due to the system being almost completely unresponsive during this time.
    The server is currently responsive again and we are working on the best way of collecting additional data so that we can see exactly what is causing this problem.

    Update (07/09/2017): A further occurrence of this issue started at 04:20 and is currently ongoing. We are performing an emergency reboot of the server as we are unable to access it remotely.

    Update (07/09/2017): Unfortunately we are experiencing issues getting Windows to boot on the TMA01/Japetus server. We are continuing to investigate.

    Update (07/09/2017 17:41): Windows is completing a file system check of the server, which is taking some time to complete. We are continuing to monitor the server and will update this status message as more information becomes available.

    Update (07/09/2017 11:55): The Windows server is continuing to complete a check disk. We are hesitant to interrupt this process as it could damage the file system of the server, so at this time we are erring on the side of caution and simply awaiting the completion of the check disk.

    Update (07/09/2017 12:50): On further investigation, the check disk had seemingly stalled: we observed zero CPU utilisation and zero disk activity, suggesting the check disk was not doing anything. We have rebooted the server and a fresh check disk has begun, which appears to be progressing much more quickly than before.

    Update (07/09/2017 14:27): The second check disk stalled at the same point as before, with the same lack of CPU or disk activity. We have rebooted the server and skipped the check disk, which has allowed it to nominally boot, but it is completely unresponsive. We are continuing to investigate; however, we have also started restoring the most recent backups to another server in case they need to be used.

    Update (07/09/2017 17:36): We have run a further check disk from the Windows repair/recovery environment on the installation CD, however this once again hung part way through. We are currently trying other methods to attempt to recover the Windows installation. The restoration of the backups is now just over half way through, with an estimated 4 hours remaining.

    Update (07/09/2017 20:09): We are continuing to work on recovering the TMA01/Japetus server, however we increasingly believe that we will need to rely on the ongoing backup restoration which is running in parallel. We currently believe that this will complete around 10PM and so we should hopefully be able to get the server back online at some point tonight.

    Update (07/09/2017 22:22): We have finished restoring the TMA01/Japetus server from backups and it is now back online again. Unfortunately this means that any data changed between the backup being taken and the server going offline will have been lost. All emails queued on our spam filters are currently being delivered.
    We apologise for the inconvenience caused by this extended outage and will be arranging service credits for all affected customers tomorrow. All services should now be back online and working normally. Please contact our helpdesk in the usual manner if you are still experiencing any issues.

  • Date - 28/08/2017 23:07 - 29/08/2017 00:20
  • Last Updated - 07/09/2017 22:36
Backhaul network maintenance (Resolved)
  • Priority - Medium
  • Affecting System - AS41000 network
  • We have been notified by our backhaul connectivity provider that scheduled maintenance work will be carried out on one of the fibre paths between LDeX2 and Equinix MA1 between 18:30 on 14/08/2017 and 07:00 on 15/08/2017 for fibre re-splicing.

    Our network is designed to withstand the loss of one circuit and continue to function as normal, so this maintenance work should not be service affecting.
    Whilst we do not anticipate a complete loss of connectivity to any part of our network during this maintenance window, there may be a brief period of increased latency or packet loss at the start and end of the maintenance work whilst the network reconverges. As with all network maintenance of this nature, the network should be considered at risk of disruption for the duration of the maintenance window.

    If you have any questions or concerns about this maintenance work, please don't hesitate to get in touch with our helpdesk in the usual manner.

    Update: This maintenance has been successfully completed without any impact to service.

  • Date - 14/08/2017 18:30 - 15/08/2017 07:00
  • Last Updated - 06/09/2017 12:46
Routine facility breaker maintenance (Resolved)
  • Priority - Medium
  • Affecting System - LDeX2
  • We have been notified by London Data eXchange that they will be carrying out work on the main incoming breaker and generator breaker on the LV switchgear panel in the LDeX2 facility between 09:00 and 17:00 on 25/08/2016 as part of their planned preventative maintenance programme.

    This work will involve a visual inspection of the switchgear as well as servicing and testing of both breakers. As such, the LDeX2 facility may be operating at a reduced level of redundancy whilst this work is being carried out, so all services hosted in LDeX2 should be considered "at-risk" for the duration of this work.

    If you have any questions or if you wish to double check how your devices are connected, please don't hesitate to get in touch with our helpdesk in the usual manner.

    Update: This maintenance has been successfully completed without any impact to service.

  • Date - 22/08/2017 08:30 - 22/08/2017 17:00
  • Last Updated - 06/09/2017 12:46
Backhaul network maintenance (Resolved)
  • Priority - Medium
  • Affecting System - AS41000 network
  • We have been notified by our backhaul connectivity provider that scheduled maintenance work will be carried out on one of the circuits between LDeX1 and LDeX2 between 20:00 on 18/07/2017 and 05:00 on 19/07/2017 in order to apply routine software updates, during which time the circuit will be down for approximately 15 minutes.

    Our network is designed to withstand the loss of one circuit and continue to function as normal, so this maintenance work should not be service affecting.
    Whilst we do not anticipate a complete loss of connectivity to any part of our network during this maintenance window, there may be a brief period of increased latency or packet loss at the start and end of the maintenance work whilst the network reconverges. As with all network maintenance of this nature, the network should be considered at risk of disruption for the duration of the maintenance window.

    If you have any questions or concerns about this maintenance work, please don't hesitate to get in touch with our helpdesk in the usual manner.

    Update: This maintenance has been successfully completed without any impact to service.

  • Date - 18/07/2017 20:00 - 19/07/2017 05:00
  • Last Updated - 24/07/2017 15:36
DDoS attack (Resolved)
  • Priority - Critical
  • Affecting System - AS41000 network
  • We have seen a DDoS attack targeting a customer. This attack was saturating some core network links, so we temporarily null routed the server in question in order to restore service to other customers. This was completed by 22:19 and normal service was resumed for all other customers.

  • Date - 18/07/2017 21:59 - 18/07/2017 22:19
  • Last Updated - 18/07/2017 22:42
Emergency reboot of LDeX1-VPS2 (Resolved)
  • Priority - High
  • Affecting System - LDeX1-VPS2
  • We are currently carrying out an emergency reboot of the LDeX1-VPS2 server due to problems with the Xen hypervisor running on it. All virtual servers running on this node will be unavailable for the duration of the reboot.

    Update: The LDeX1-VPS2 node is back online and all virtual servers have been restarted. Please accept our apologies for the inconvenience and don't hesitate to get in touch with our helpdesk in the usual manner if you are still experiencing any issues.

  • Date - 14/07/2017 16:25 - 14/07/2017 16:43
  • Last Updated - 14/07/2017 16:46
Border router software update (THN-RT1) (Resolved)
  • Priority - Medium
  • Affecting System - AS41000 network
  • We will be performing a software update on our THN-RT1 border router in Telehouse North on 23/06/2017 starting at 21:00.

    This update will require us to reboot the THN-RT1 border router and so will be service affecting for any customers with services terminating directly on the router such as IP transit. Other services will not be affected as we will gracefully route traffic away from the router prior to performing the update.

    We have sufficient upstream IP transit and peering capacity elsewhere in the network to handle the load from the THN-RT1 border router without causing congestion, however we will be running without redundancy and so the network should be considered at-risk for the duration of the work.

    If you have any questions or concerns about this maintenance work, please don't hesitate to get in touch with our helpdesk in the usual manner.

    Update: This work has been re-scheduled for 28/06/2017 between 21:00 and 23:59.

    Update: This maintenance has been successfully completed without any impact to service.

  • Date - 28/06/2017 21:00 - 28/06/2017 23:59
  • Last Updated - 28/06/2017 22:44
Upstream network maintenance (Resolved)
  • Priority - Medium
  • Affecting System - AS41000 network
  • One of our upstream network providers has informed us that they will be carrying out routine maintenance work on one of our transit connections between 03:00 and 05:00 on 01/07/2017. During this period the connection will be unavailable for up to 30 minutes.

    On top of our substantial public peering, we have five upstream connections to three transit networks split across two data centres and significant spare capacity, so we do not anticipate any impact to service as a result of this maintenance work. However, as with all network maintenance of this nature, the network should be considered at risk of disruption for the duration of the maintenance window.

    If you have any questions or concerns about this maintenance work, please don't hesitate to get in touch with our helpdesk in the usual manner.

  • Date - 01/07/2017 03:00 - 18/07/2017 18:34
  • Last Updated - 26/06/2017 18:30
Backhaul network maintenance (Resolved)
  • Priority - Medium
  • Affecting System - AS41000 network
  • We have been notified by our backhaul connectivity provider that scheduled maintenance work will be carried out on one of the circuits between THN and Equinix MA1 between 22:00 on 18/06/2017 and 05:00 on 19/06/2017 in order to re-locate the fibre as a result of road works on the M6.

    Our network is designed to withstand the loss of one circuit and continue to function as normal, so this maintenance work should not be service affecting.
    Whilst we do not anticipate a complete loss of connectivity to any part of our network during this maintenance window, there may be a brief period of increased latency or packet loss at the start and end of the maintenance work whilst the network reconverges. As with all network maintenance of this nature, the network should be considered at risk of disruption for the duration of the maintenance window.

    If you have any questions or concerns about this maintenance work, please don't hesitate to get in touch with our helpdesk in the usual manner.

    Update: This maintenance has been successfully completed without any impact to service.

  • Date - 18/06/2017 22:00 - 19/06/2017 05:00
  • Last Updated - 22/06/2017 10:18
LDeX1-VPS1 unresponsive (Resolved)
  • Priority - Critical
  • Affecting System - LDeX1-VPS1
  • We are investigating an issue with the LDeX1-VPS1 node which is currently unresponsive.

    Update: It seems that the LDeX1-VPS1 server may have suffered a hardware failure as we are currently unable to get the server to power on.

    Update (01:35 20/06/2017): After physically disconnecting the power supply, the server has now booted successfully and all VPS have been started back up. We will continue to monitor the server in case of any further issues.

    Update (11:02 20/06/2017): This issue seems to have recurred.

    Update (13:20 20/06/2017): The server is still offline; a hard reboot did briefly restore service, but it was unstable. We have arranged for the PSU to be swapped for another, which may resolve the issue. We have also dispatched a spare motherboard and other parts via same-day courier, which should arrive towards the end of the afternoon should they be required.

    Update (18:51 20/06/2017): The spare parts have arrived at the data centre and we are swapping the motherboard, CPU and RAM over now.

    Update (21:25 20/06/2017): We are still seeing the same problem after swapping the motherboard, CPU and RAM so are now examining the RAID card.

    Update (22:47 20/06/2017): We have run into a problem with the new RAID card - the ports are in a different position and so the cables are not long enough to reach. We are trying to work out the best way around this.

    Update (23:43 20/06/2017): We have been unable to find a way to work around the problem with the cables, so longer cables have been ordered for delivery tomorrow (21st).

    Update (13:08 21/06/2017): Unfortunately the server is still exhibiting the same behaviour with the new cables. We are continuing to try and determine the root cause.

    Update (13:30 21/06/2017): The server is currently online and working, although we haven't made any changes so we aren't sure if this will remain the case. We are continuing to investigate the root cause of the fault.

    Update (14:00 21/06/2017): The server has remained stable for 30 minutes, however as we can't be confident that the issue won't reoccur we are making arrangements to migrate customers to alternative nodes.

    Update (19:29 21/06/2017): The server has failed again.

    Update (21:52 21/06/2017): We have been unable to get the server to boot again, so we are moving the hard drives to another chassis. We will then use this temporary chassis to access the data on the RAID array and migrate the servers to other VPS nodes.

    Update (22:54 21/06/2017): We have successfully booted the drives in the temporary chassis and have begun migrating the VPS to other servers.

    Update (22:54 21/06/2017): All VPS have been successfully migrated to other servers.

  • Date - 20/06/2017 00:37 - 22/06/2017 00:52
  • Last Updated - 21/06/2017 23:52
IP transit outage (Resolved)
  • Priority - Medium
  • Affecting System - AS41000 network
  • We have seen both of our connections to one of our IP transit suppliers in Telehouse North go physically down. All traffic has been automatically re-routed over alternative paths.

    We have two other IP transit providers as well as extensive peering at four internet exchanges, so we have ample capacity to handle the loss of a single supplier and still maintain redundancy.

    Update: The connections have come back up again. We have contacted the supplier in question to find out what happened.

    Update: We have seen a second flap on the connection, so we have disabled it until the supplier confirms that they have identified the root cause and are confident that the service has stabilised.

    Update: Our IP transit provider has identified a loose fibre patch which they believe may have caused this issue. They have replaced the fibre patch in question and are monitoring the stability of the connection, as are we. Once we are happy that it is stable, we will bring it back into service.

    Update: We have seen a further flap on our IP transit circuit, so replacing the loose fibre patch hasn't resolved the issue. Our IP transit provider is continuing to investigate.

    Update: Our IP transit provider has identified a second loose fibre patch which has also been replaced and the circuit has been stable for over 5 hours. We have therefore re-enabled the connection and will continue to monitor it.

  • Date - 20/06/2017 11:22 - 20/06/2017 11:26
  • Last Updated - 20/06/2017 18:20
DDoS attack (Resolved)
  • Priority - Critical
  • Affecting System - AS41000 network
  • We have seen a DDoS attack targeting the TMA01/Japetus server. This attack was saturating some core network links, so we temporarily null routed the TMA01/Japetus IP address in order to restore service to other servers. This was completed by 14:16 and normal service was resumed for all servers except TMA01/Japetus.

    We have subsequently been able to restore service to the TMA01/Japetus server at approximately 14:27. We will continue to monitor the network for any sign of further attacks.

    Update: We have seen a further attack starting at approximately 19:52. This was once again targeting the TMA01/Japetus server, which we null routed again at approximately 19:58 in order to restore service to the rest of the network whilst we investigated the attack further. This null route was then removed at 20:13 and normal service was resumed for TMA01/Japetus.

    Update: We have brought additional capacity online in the core network, as well as putting some extra protection in place for the server being targeted. Hopefully this should be sufficient to mitigate any further attacks.

    Update: We have not seen any further DDoS attacks.
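
    For illustration, a minimal Python sketch of a host-level null route of the kind described above, using the iproute2 "blackhole" route type. The address is a documentation-range placeholder, and a provider would typically apply the equivalent at the routing layer (for example via a BGP-signalled blackhole) rather than per host:

        # Illustrative host-level null route using the iproute2
        # "blackhole" route type; requires root privileges.
        import subprocess

        TARGET = "192.0.2.10/32"  # placeholder address, not the real target

        def set_null_route(enable: bool) -> None:
            action = "add" if enable else "del"
            subprocess.run(["ip", "route", action, "blackhole", TARGET], check=True)

        set_null_route(True)    # drop all traffic to the target
        # ... once the attack has subsided ...
        set_null_route(False)   # restore normal forwarding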

  • Date - 18/06/2017 14:11 - 18/06/2017 14:27
  • Last Updated - 20/06/2017 10:10
Border router software update (EQMA1-RT1) (Resolved)
  • Priority - Medium
  • Affecting System - AS41000 network
  • We will be performing a software update on our EQMA1-RT1 border router in Equinix MA1 (formerly Telecity Williams House/Kilburn House) on 16/06/2017 starting at 21:00.

    This update will require us to reboot the EQMA1-RT1 border router and so will be service affecting for any customers with services terminating directly on the router such as IP transit. Other services will not be affected as we will gracefully route traffic away from the router prior to performing the update.

    We have sufficient upstream IP transit and peering capacity elsewhere in the network to handle the load from the EQMA1-RT1 border router without causing congestion, however we will be running without redundancy and so the network should be considered at-risk for the duration of the work.

    If you have any questions or concerns about this maintenance work, please don't hesitate to get in touch with our helpdesk in the usual manner.

    Update: This maintenance has been successfully completed without any impact to service.

  • Date - 16/06/2017 21:00 - 16/06/2017 23:59
  • Last Updated - 16/06/2017 23:26
Backhaul network maintenance (Resolved)
  • Priority - Medium
  • Affecting System - AS41000 network
  • We have been notified by our backhaul connectivity provider that scheduled maintenance work will be carried out on one of the circuits between THN and Equinix MA1 between 22:00 on 09/07/2017 and 05:00 on 10/07/2017 in order to re-locate the fibre as a result of the expansion of Totworth Quarry.

    Our network is designed to withstand the loss of one circuit and continue to function as normal, so this maintenance work should not be service affecting.
    Whilst we do not anticipate a complete loss of connectivity to any part of our network during this maintenance window, there may be a brief period of increased latency or packet loss at the start and end of the maintenance work whilst the network reconverges. As with all network maintenance of this nature, the network should be considered at risk of disruption for the duration of the maintenance window.

    If you have any questions or concerns about this maintenance work, please don't hesitate to get in touch with our helpdesk in the usual manner.

    Update: This work has been re-scheduled to take place between 22:00 on 15/07/2017 and 05:00 on 16/07/2017.

  • Date - 15/07/2017 22:00 - 18/07/2017 18:34
  • Last Updated - 15/06/2017 19:59
Upstream network maintenance (Resolved)
  • Priority - Medium
  • Affecting System - AS41000 network
  • One of our upstream network providers has informed us that they will be carrying out routine maintenance work on one of our transit connections between 23:00 on 30/05/2017 and 06:00 on 31/05/2017. During this period the connection will be unavailable for up to 30 minutes.

    On top of our substantial public peering, we have five upstream connections to three transit networks split across two data centres and significant spare capacity, so we do not anticipate any impact to service as a result of this maintenance work. However, as with all network maintenance of this nature, the network should be considered at risk of disruption for the duration of the maintenance window.

    If you have any questions or concerns about this maintenance work, please don't hesitate to get in touch with our helpdesk in the usual manner.

    Update: This maintenance has been successfully completed without any impact to service.

  • Date - 30/05/2017 23:00 - 31/05/2017 06:00
  • Last Updated - 12/06/2017 14:15
Plesk upgrade (Resolved)
  • Priority - Medium
  • Affecting Server - TMA01/Japetus
  • We will be performing scheduled maintenance work to upgrade the Plesk control panel software on the TMA01/Japetus server starting at 21:00 on 12/05/2017.

    Due to the nature of the work, all services hosted on the TMA01/Japetus server may be periodically unavailable for portions of the maintenance window. We have scheduled a maintenance window of 21:00 on 12/05/2017 to 09:00 on 13/05/2017, however we hope to complete the service-affecting portion of this work well within this window.

    This upgrade will bring updated software such as PHP and new features, but will also lay the groundwork for a future migration of your hosting service to a newer version of Windows Server, IIS and SQL Server running on newer hardware.

    If you have any questions about this maintenance or about any other aspect of your hosting please don't hesitate to get in touch with our helpdesk in the usual manner or give us a call on 03300 882130.

    Update: Unfortunately we have encountered some problems with this update which have caused us to overrun the maintenance window. Currently the Plesk control panel is inaccessible, however web and email services are working normally.

    Update: The issues encountered with the upgrade process have been overcome; the Plesk control panel is once again accessible and the upgrade is complete.

  • Date - 12/05/2017 21:00 - 13/05/2017 16:24
  • Last Updated - 13/05/2017 16:24
Backhaul network maintenance (Resolved)
  • Priority - Medium
  • Affecting System - AS41000 network
  • We have been notified by our backhaul connectivity provider that scheduled maintenance work will be carried out on one of the circuits between LDeX1 and LDeX2 between 22:00 on 16/05/2017 and 00:00 on 17/05/2017, during which time the circuit will be down for approximately 30 minutes.

    Our network is designed to withstand the loss of one circuit and continue to function as normal, so this maintenance work should not be service affecting.
    Whilst we do not anticipate a complete loss of connectivity to any part of our network during this maintenance window, there may be a brief period of increased latency or packet loss at the start and end of the maintenance work whilst the network reconverges. As with all network maintenance of this nature, the network should be considered at risk of disruption for the duration of the maintenance window.

    If you have any questions or concerns about this maintenance work, please don't hesitate to get in touch with our helpdesk in the usual manner.

    Update: This maintenance has been cancelled.

  • Date - 16/05/2017 22:00 - 17/05/2017 00:00
  • Last Updated - 12/05/2017 13:29
Backhaul network maintenance (Resolved)
  • Priority - Medium
  • Affecting System - AS41000 network
  • We have been notified by our backhaul connectivity provider that scheduled maintenance work will be carried out on one of the circuits between Telehouse North (THN) and LDeX1 between 00:01 and 07:01 on 09/05/2017 in order to apply routine software updates.

    Our network is designed to withstand the loss of one circuit and continue to function as normal, so this maintenance work should not be service affecting.
    Whilst we do not anticipate a complete loss of connectivity to any part of our network during this maintenance window, there may be a brief period of increased latency or packet loss at the start and end of the maintenance work whilst the network reconverges. As with all network maintenance of this nature, the network should be considered at risk of disruption for the duration of the maintenance window.

    If you have any questions or concerns about this maintenance work, please don't hesitate to get in touch with our helpdesk in the usual manner.

    Update: This maintenance has been completed successfully.

  • Date - 09/05/2017 00:01 - 09/05/2017 07:01
  • Last Updated - 12/05/2017 13:26
Backhaul network maintenance (Resolved)
  • Priority - Medium
  • Affecting System - AS41000 network
  • We have been notified by our backhaul connectivity provider that emergency maintenance work will be carried out on one of the circuits between LDeX1 and LDeX2 between 18:00 and 23:59 on 02/05/2017, during which time the circuit will be at risk for approximately 20 minutes.

    Our network is designed to withstand the loss of one circuit and continue to function as normal, so this maintenance work should not be service affecting.
    Whilst we do not anticipate a complete loss of connectivity to any part of our network during this maintenance window, there may be a brief period of increased latency or packet loss at the start and end of the maintenance work whilst the network reconverges. As with all network maintenance of this nature, the network should be considered at risk of disruption for the duration of the maintenance window.

    If you have any questions or concerns about this maintenance work, please don't hesitate to get in touch with our helpdesk in the usual manner.

    Update: This maintenance has been completed successfully.

  • Date - 02/05/2017 18:00 - 02/05/2017 23:59
  • Last Updated - 03/05/2017 14:07
Routine facility UPS maintenance (Resolved)
  • Priority - Medium
  • Affecting System - LDeX1 UPS
  • We have been notified by London Data eXchange that they will be carrying out work on the UPS in the LDeX1 facility between 09:00 and 17:00 on 25/04/2017 as part of their planned preventative maintenance programme.

    This work will involve a firmware update, visual inspection of the components in each of the UPS systems, functional testing and cleaning of the fans. As such, the LDeX1 facility may be operating at a reduced level of redundancy whilst this work is being carried out, so all services hosted in LDeX1 should be considered "at-risk" for the duration of this work.

    This maintenance work will be carried out on both the A-side and B-side UPS systems separately. At no point will both systems be under maintenance simultaneously. During this maintenance period, it may be necessary for either of the UPS units to be placed into bypass mode. This means that any equipment connected to the feed supplied by that UPS unit will be running on raw mains power and as such should be considered at-risk in case there is an outage on the utility mains feed. Generator backup power will remain available throughout the maintenance work if required.

    All devices with dual power supplies should be connected to both the A-side and B-side PDUs, so in the event of any problems on one of the feeds they will still have the other feed available. All devices with single power supplies should be fed from our in-rack ATS units, which can switch between the two feeds fast enough that connected devices do not see any loss of power.

    If you have any questions or if you wish to double check how your devices are connected, please don't hesitate to get in touch with our helpdesk in the usual manner.

    Update: This maintenance has been successfully completed without any impact to service.

  • Date - 25/04/2017 09:00 - 25/04/2017 17:00
  • Last Updated - 25/04/2017 22:16
Upstream network maintenance (Resolved)
  • Priority - Low
  • Affecting System - AS41000 network
  • One of our upstream network providers has informed us that they will be carrying out routine maintenance work on one of our transit connections on 02/04/2017 between 00:00 and 05:00. During this period the connection will be unavailable for up to 45 minutes.

    On top of our substantial public peering, we have five upstream connections to three transit networks split across two data centres and significant spare capacity, so we do not anticipate any impact to service as a result of this maintenance work. However, as with all network maintenance of this nature, the network should be considered at risk of disruption for the duration of the maintenance window.

    If you have any questions or concerns about this maintenance work, please don't hesitate to get in touch with our helpdesk in the usual manner.

    Update: This maintenance has been successfully completed without any impact to service.

  • Date - 02/04/2017 00:00 - 02/04/2017 05:00
  • Last Updated - 15/04/2017 19:12
Backhaul network maintenance (Resolved)
  • Priority - Medium
  • Affecting System - AS41000 network
  • We have been notified by our backhaul connectivity provider that they will be carrying out scheduled maintenance work on the devices which terminate our connections in LDeX1 between 22:00 and 04:00 on 07/04/2017. During this period they will need to reboot each device in turn, which will result in a loss of service for approximately 5-10 minutes. These reboots will be at least 1 hour apart in order to minimise disruption and ensure network stability.

    Our network is designed to withstand the loss of one circuit and continue to function as normal, so this maintenance work should not be service affecting.
    Whilst we do not anticipate a complete loss of connectivity to any part of our network during this maintenance window, there may be a brief period of increased latency or packet loss at the start and end of the maintenance work whilst the network reconverges. As with all network maintenance of this nature, the network should be considered at risk of disruption for the duration of the maintenance window.

    If you have any questions or concerns about this maintenance work, please don't hesitate to get in touch with our helpdesk in the usual manner.

    Update: This maintenance has been successfully completed without any impact to service.

  • Date - 07/04/2017 22:00 - 08/04/2017 04:00
  • Last Updated - 15/04/2017 19:12
Reboot of TMA01/Japetus (Resolved)
  • Priority - Low
  • Affecting Server - TMA01/Japetus
  • We will be performing a reboot of our Plesk Windows server (TMA01/Japetus) at 21:00 on 5th April to fix an issue identified with the .NET Framework. The reboot should only last for a few minutes; however, we have scheduled 30 minutes in case additional reboots are required.

  • Date - 05/04/2017 21:00 - 05/04/2017 21:30
  • Last Updated - 05/04/2017 21:36
Backhaul network maintenance (Resolved)
  • Priority - Medium
  • Affecting System - AS41000 network
  • Our backhaul network supplier has informed us that our connection between Telehouse North (THN) and Equinix MA1 (formerly Telecity Williams House and Kilburn House) will be impacted by fibre re-routing work being carried out on the route between MA1 and THN between 21:00 on 18/03/2017 and 06:00 on 19/03/2017.
    Our network is designed to withstand the loss of one circuit and continue to function as normal, so this maintenance work should not be service affecting.

    Whilst we do not anticipate a complete loss of connectivity to any part of our network during this maintenance window, there may be a brief period of increased latency or packet loss at the start and end of the maintenance work whilst the network reconverges. As with all network maintenance of this nature, the network should be considered at risk of disruption for the duration of the maintenance window.

    If you have any questions or concerns about this maintenance work, please don't hesitate to get in touch with our helpdesk in the usual manner.

    Update: This maintenance has been successfully completed without any impact to service.

  • Date - 18/03/2017 21:00 - 19/03/2017 06:00
  • Last Updated - 30/03/2017 11:01
Backhaul network maintenance (Resolved)
  • Priority - Medium
  • Affecting System - AS41000 network
  • Our backhaul network supplier has informed us that our connection between Telehouse North (THN) and Equinix MA1 (formerly Telecity Williams House and Kilburn House) will be impacted by fibre re-routing work being carried out on the route between MA1 and THN between 21:00 on 11/03/2017 and 06:00 on 12/03/2017.
    Our network is designed to withstand the loss of one circuit and continue to function as normal, so this maintenance work should not be service affecting.

    Whilst we do not anticipate a complete loss of connectivity to any part of our network during this maintenance window, there may be a brief period of increased latency or packet loss at the start and end of the maintenance work whilst the network reconverges. As with all network maintenance of this nature, the network should be considered at risk of disruption for the duration of the maintenance window.

    If you have any questions or concerns about this maintenance work, please don't hesitate to get in touch with our helpdesk in the usual manner.

    Update: This maintenance has been successfully completed without any impact to service.

  • Date - 11/03/2017 21:00 - 12/03/2017 06:00
  • Last Updated - 30/03/2017 11:01
Routine facility breaker maintenance (Resolved)
  • Priority - Low
  • Affecting System - LDeX1 LV Panel Breakers
  • As part of our ongoing PPM programme, on Monday 27th February 2017 between 08:30 and 17:00 Emerson Network Power will be carrying out a routine maintenance operation on the LV panel main incomer breaker, the generator breaker and associated switchgear.

    This operation will mean that there will be a reduced level of resilience at times during the maintenance window; however, we don't anticipate any disruption to your service.

    This maintenance procedure includes a visual inspection of the switchgear, service and test of the ACCB and generator breaker.

    Update: This maintenance has been successfully completed without any impact to service.

  • Date - 27/02/2017 08:30 - 27/02/2017 17:00
  • Last Updated - 27/02/2017 14:13
Website builder unavailable (Resolved)
  • Priority - High
  • Affecting System - Website builder
  • We are currently investigating an issue affecting all websites hosted on our website builder service. The websites along with the editor are currently unavailable and we are looking into this as a matter of urgency.

    Update: Normal service has been restored. Please accept our apologies for the inconvenience caused and feel free to contact our support staff via the usual means if you are still experiencing any problems.

  • Date - 27/02/2017 09:01 - 27/02/2017 10:23
  • Last Updated - 27/02/2017 10:23
Unexpected reboot of LDeX1-VPS1 (Resolved)
  • Priority - High
  • Affecting System - LDeX1-VPS1
  • The LDeX1-VPS1 hypervisor has unexpectedly rebooted. The hypervisor is already back up and running and all VMs are booting.

    Update: All but one VM is back online. The VM in question is currently carrying out a filesystem check due to the length of time since the last one.
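
    For context, ext-family filesystems on Linux can schedule a forced check after a set number of mounts or a set time interval, which is the sort of trigger described above. A minimal Python sketch for inspecting those settings, assuming an ext filesystem and a hypothetical device name:

        # Sketch: inspect the settings that trigger a periodic forced
        # fsck on ext2/3/4 filesystems. Device name is a placeholder.
        import subprocess

        DEVICE = "/dev/xvda1"  # placeholder device

        info = subprocess.run(
            ["tune2fs", "-l", DEVICE],
            capture_output=True, text=True, check=True,
        ).stdout

        for line in info.splitlines():
            # "Maximum mount count" and "Check interval" determine when
            # the next forced check happens; -1 or 0 disable the trigger.
            if line.startswith(("Mount count", "Maximum mount count", "Check interval")):
                print(line)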

    Update: The filesystem check on the VM has finished. All virtual servers have finished starting and normal service has been resumed. We will continue to monitor the server in case of any further issues as well as investigating what caused the unexpected reboot. Please accept our apologies for the inconvenience caused and feel free to contact our support staff via the usual means if you are still experiencing any problems.

  • Date - 26/02/2017 03:23 - 26/02/2017 03:27
  • Last Updated - 26/02/2017 03:38
Server unresponsive (Resolved)
  • Priority - High
  • Affecting Server - LDeX1-cPanel1
  • We are currently investigating why we are unable to connect to the LDeX1-cPanel1 server remotely.

    Update: We are unable to connect to the hypervisor management interface in order to access the local console on the virtual machine either. It seems that there may be an issue with the physical hardware running this virtual machine.

    Update: The physical server seems to be unresponsive, so we are rebooting it via the IPMI out-of-band management interface.

    Update: The physical server has finished rebooting and has successfully rejoined the cloud hosting environment. The virtual machine is now starting.

    Update: The virtual machine has finished booting and services are back up and running normally again. Please accept our apologies for the inconvenience caused and feel free to contact our support staff via the usual means if you are still experiencing any problems.

  • Date - 12/02/2017 17:57 - 12/02/2017 18:19
  • Last Updated - 12/02/2017 18:25
Network upgrades in Manchester (Resolved)
  • Priority - Medium
  • Affecting System - AS41000 Network
  • We will be performing essential network maintenance on Wednesday 25th January starting at 18:00 until approximately 21:00 to bring a new network point of presence (PoP) and upgraded routers online.

    At the beginning of this maintenance window our network links between LDeX2 (Media City, Manchester) and Telehouse North (London Docklands) will be taken offline; in addition, our peering connections at IXManchester and our Cogent transit ports in LDeX1 (North London) will be taken offline. During the maintenance window our Manchester network will be operating with reduced redundancy, with only a single network path between Manchester and London, and our entire network will be operating with reduced router redundancy as we will be running on a single router whilst our second router is installed into our new PoP.

    At the conclusion of the maintenance window we will have expanded our network into our new PoP at Equinix MA1 (previously known as Telecity Williams) in Manchester with a state-of-the-art high performance Cisco router, brought online new transit connections from GTT in Manchester, and moved our second Cogent port from London to Manchester.

    This maintenance is highly complex and requires core network connections and ports to be moved and reconfigured so during the window the AS41000 network should be considered at risk - we will update this notice during the maintenance window.

    Update: Our Cogent transit connection in LDeX1 and our IXManchester peering connection in LDeX2 have both been shut down. The link between THN and LDeX2 has also been disabled ready for the equipment to be installed in Equinix MA1 and inserted into the ring.

    Update: We have completed the installation of the equipment in the new PoP at Equinix MA1 and redundancy has been restored to the ring, with MA1 now sitting between THN and LDeX2.

    Our IXManchester peering connection is back online; however, we have encountered issues with both our new Cogent and GTT transit connections in MA1. We are currently working with both providers to try and get these connections online as quickly as possible. Unfortunately this means that in the meantime we are operating with reduced transit redundancy.

    Update: Our Cogent transit connection is now fully operational. We are working with GTT to resolve an ongoing issue that prevents the new transit port from being brought into service. Now that our Cogent transit is online, our network is fully redundant once again, as both routers are able to handle traffic onward to the internet.

    Update: Our GTT transit connection in MA1 is now online and working. This completes the full range of peering and transit connections.

  • Date - 25/01/2017 18:30 - 26/01/2017 21:00
  • Last Updated - 08/02/2017 11:53
Backup server unavailable (Resolved)
  • Priority - High
  • Affecting System - Backups
  • We have become aware of an issue affecting the KSP810-Back2 backup server which provides backups for hosting customers and managed customers. Currently the web interface for the backup software is unavailable and we don't believe any new backups are being taken.

    Update: Whilst the server is responding to ICMP ping requests, we are unable to gain access to the server remotely via SSH as the connection is actively being refused. The local console for the server is showing I/O errors and we are unable to get to the login prompt, so we have been forced to perform an emergency reboot of the server via the IPMI controller.

    Update: The server is refusing to boot as it doesn't seem to be detecting the RAID card. We are going to attempt a hard power cycle of the server chassis by having the data centre remote hands technicians physically unplug the power cables.

    Update: The server is still unable to detect the RAID card after removing the power cables, so we have asked the data centre remote hands technicians to open the server up, inspect the RAID card for any obvious physical problems and re-seat it. 

    Update: The RAID card has been re-seated, but unfortunately is still not being detected by the server so it seems likely that this is a hardware failure. We have sent what little diagnostic data we have over to the vendor for them to investigate and will be looking at options for replacing the RAID card and re-importing the RAID array. 

    Update: The vendor have suggested disconnecting the backplane from the RAID card to see if it shows up then, but this was unsuccessful. They have therefore requested that we remove the RAID card and ship it back to them for them to test and replace.

    Unfortunately, we weren't aware that the advanced replacement cover on the RAID card is only for the first 2 years of the 3 year warranty and has expired, so we will need to wait for them to receive the failed RAID card before shipping us a new one. We also don't have a spare 16-port RAID card in stock, only 8-port RAID cards and our supplier advises that new 16-port cards are on back-order with the vendor with an ETA of early December.

    In order to resume taking backups as soon as possible, we have therefore decided to bring forward the planned introduction of a new backup server which we recently purchased.

    We will still endeavour to replace the failed RAID card and have the old backup server accessible again as soon as possible.

    Update: The new backup server has been installed in LDeX2 and is currently being configured. We hope to be able to resume taking backups later today once this configuration work is complete.

    Update: We have completed the configuration of the backup tasks for the first few servers and the backup jobs are now running. We will continue setting up the remaining servers.

    Update: The first backup task has completed. We are still setting up backups for some servers.

    Update: We have finished setting up backups for all servers. Most servers have completed their initial backup and are now running hourly differential backups as normal. The remaining servers should finish their initial backups in the next few hours.

    Update: We have received the replacement RAID card and installed it into the old backup server. After performing a firmware update it has now successfully imported the RAID array and we have been able to boot the server normally. All legacy data contained on the old backup server is therefore accessible once again.

    Update: As we now have more than a full cycle of backups on the new backup server and any restoration tasks from the old backup server have been completed, we have decommissioned the old backup server permanently.

  • Date - 14/11/2016 16:00 - 20/11/2016 12:00
  • Last Updated - 05/01/2017 19:09
Backhaul network maintenance (Resolved)
  • Priority - Medium
  • Affecting System - AS41000 network
  • Our backhaul network supplier has informed us that our connection between LDeX2 and THN will be at-risk due to fibre splicing work being carried out on the route between Equinix MA1 (formerly Telecity Williams House and Kilburn House) and Telehouse North (THN) between 22:00 on 11/12/2016 and 05:00 on 12/12/2016.

    This work should not directly affect any of the circuits which make up our backhaul provider's network (and thus our network), however due to the close proximity of the work to fibres carrying our traffic there is a small risk that there may be some unintended disruption.
    Our network is designed to withstand the loss of one circuit and continue to function as normal, so if there is any disruption to this circuit then it should not be service affecting, however there may be a brief period of increased latency or packet loss whilst the network reconverges.

    If you have any questions or concerns about this maintenance work, please don't hesitate to get in touch with our helpdesk in the usual manner.

    Update: This work has been completed without impact.

  • Date - 11/12/2016 22:00 - 12/12/2016 05:00
  • Last Updated - 05/01/2017 18:22
Backhaul network maintenance (Resolved)
  • Priority - Medium
  • Affecting System - AS41000 network
  • Our backhaul network supplier has informed us that their connection between LDeX1 and LDeX2 will be offline for up to 90 minutes between 23:00 on 02/12/2016 and 03:00 on 03/12/2016 due to their supplier swapping out a series of line cards as part of preventative maintenance. Our network is designed to withstand the loss of one circuit and continue to function as normal, so this maintenance work should not be service affecting.

    Whilst we do not anticipate a complete loss of connectivity to any part of our network during this maintenance window, there may be a brief period of increased latency or packet loss at the start and end of the maintenance work whilst the network reconverges. As with all network maintenance of this nature, the network should be considered at risk of disruption for the duration of the maintenance window.

    If you have any questions or concerns about this maintenance work, please don't hesitate to get in touch with our helpdesk in the usual manner.

    Update: This work has been completed without impact.

  • Date - 02/12/2016 23:00 - 03/12/2016 03:00
  • Last Updated - 05/01/2017 18:22
TMA01/Japetus outage (Resolved)
  • Priority - High
  • Affecting Server - TMA01/Japetus
  • We have become aware of a problem with our Plesk Windows environment hosted on TMA01/Japetus. The server is currently offline and we are unable to connect to it to diagnose the problem. Data centre staff are en route to the server to reboot the hardware in person. We will update this status once more information is known and service has been restored.

    Update 18:11 - The server is now back up and running. The host hardware had crashed and was only recoverable with a hard reboot of the server. We will investigate the root cause to ensure ongoing stability. Customers continuing to experience problems should contact support for assistance.

  • Date - 05/01/2017 17:15 - 05/01/2017 18:11
  • Last Updated - 05/01/2017 18:13
Core router replacement in Telehouse North (Resolved)
  • Priority - Medium
  • Affecting System - AS41000 network
  • On Saturday 26th November starting at 10:00am we will be upgrading our THN-RT1 core router located in Telehouse North in London with new hardware. This upgrade will involve us shutting down the old router as well as peering and transit connections that are connected to that router. In addition this will break the backhaul ring between LDeX1, LDeX2, and THN so our AS41000 network will be operating with reduced backhaul redundancy.

    Once the new router is installed we will also be taking the opportunity to move our Tier 1 transit connections from our LDeX1-RT1 router to the new THN-RT1 router. At the same time we will bring new Tier 1 transit connectivity online with a third provider.

    Furthermore, the new router will allow us to bring online new peering connections to LONAP and increase our bandwidth to our existing transit providers.

    Due to the nature of the work being undertaken the network should be considered at risk and some disruption to connectivity is possible for short periods throughout the maintenance window.

    Update: The new router was successfully installed and is carrying production traffic. Full redundancy and capacity have been restored to the network.

  • Date - 26/11/2016 10:00 - 26/11/2016 12:00
  • Last Updated - 28/11/2016 16:03
Maintenance for Freethought cloud environment (Resolved)
  • Priority - Low
  • Affecting System - Cloud
  • We will be carrying out essential maintenance to the Freethought cloud environment used to host our shared, reseller, and website builder products. This maintenance is to install a number of firmware and software updates to the switches, shared storage, and the hypervisor installed on each node. In addition we will be making configuration changes to improve the performance and capacity of the storage network.

    The maintenance should not be significantly service affecting, however there will be brief moments of inaccessibility whilst servers migrate from one hardware node to another during the maintenance period. There may also be brief periods of packet loss whilst network traffic migrates from the active switch to the redundant switch whilst firmware is updated and switches rebooted. During the maintenance window services hosted within the cloud environment should be considered at risk.

    The following servers will be affected:

    • LDeX1-cPanel1
    • LDeX1-cPanel2
    • LDeX1-cPanel3
    • LDeX1-cPanel4
    • LDeX1-Plesk1
    • Website builder platform
    • LDeX1-WBMail1 (Zimbra mail server)
    • Customers with managed cloud servers

    Update: The switch firmware and hypervisor software updates have been completed successfully, however the multipath storage tweaks were only partially completed due to some unforeseen complications and the SAN firmware update was aborted due to overrunning the maintenance window.

    As such, we will be working with the vendors to address the issues that we encountered last night and then scheduling a further maintenance window in order to complete the remaining tasks. We will let you know once we have a date and time for this.

    In the meantime, the cloud server environment is operating normally with full redundancy. There is no adverse impact from any of this incomplete work.

    Update: The storage vendor has provided us with steps which they believe will resolve the issues that we encountered, so we have scheduled a second maintenance window for 23/07/2016 in order to carry out the remaining multipathing configuration tweaks as well as the SAN firmware update which we were unable to complete during the first maintenance window.

    This maintenance window will be from 21:00 on 23/07/2016 to 03:00 on 24/07/2016 and the impact should be the same as last time - brief disruption whilst we live migrate virtual machines between physical host nodes.

    Update: Unfortunately, we encountered some issues with the hypervisors whilst carrying out the maintenance work last night. These are unrelated to the previous issues and we were able to overcome them, but they slowed our progress and so we were unable to complete all of the necessary work within the scheduled maintenance window. As such, we will need to arrange a further maintenance window in order to complete the remaining maintenance work.

  • Date - 23/07/2016 21:00 - 24/07/2016 03:00
  • Last Updated - 24/11/2016 09:10
Backhaul network maintenance (Resolved)
  • Priority - Medium
  • Affecting System - AS41000 network
  • The supplier of our network links between LDeX1 and Telehouse North is repairing some damaged fibre at Cricklewood on 18th October 2016 between midnight and 6am. During the maintenance window there will be reduced connectivity and resilience between the LDeX1 and Telehouse North (THN) data centres and the Freethought AS41000 network should be considered at risk.

    There may be brief moments of packet loss or increased latency during the maintenance window whilst our network reconverges as specific links are lost and restored.

  • Date - 18/10/2016 00:01 - 18/10/2016 06:01
  • Last Updated - 24/11/2016 09:10
Emergency reboot (Resolved)
  • Priority - High
  • Affecting Server - TMA01/Japetus
  • We are currently performing an emergency reboot of the TMA01/Japetus server as the IIS web server service has stopped and refuses to start again.

    Update: The TMA01/Japetus server is back online and working normally again. Please accept our apologies for the inconvenience caused and feel free to contact our support staff via the usual means if you are still experiencing any problems.

  • Date - 20/11/2016 21:27 - 20/11/2016 21:46
  • Last Updated - 20/11/2016 21:49
Virgin Media customers experiencing issues (Resolved)
  • Priority - High
  • Affecting System - AS41000 network
  • We are aware that customers of Virgin Media are currently experiencing problems accessing our network. We're investigating the problem and will update once we know more.

    Update (10:14): We have determined that another network operator is mistakenly announcing Virgin Media IP addresses into the LINX route servers where we peer with a wide range of networks. This is causing return traffic destined for Virgin Media to instead go to this other network. We have put measures in place to route around this issue as a temporary fix; Virgin Media customers should now be able to access our network.
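
    For context, a leak like this usually captures traffic because routers forward on the longest matching prefix, so a more-specific announcement wins regardless of who legitimately originates the address space. The notice does not state the exact prefixes involved; the Python sketch below uses RFC 5737 documentation ranges purely for illustration.

        import ipaddress

        # Hypothetical routing table: each prefix and the network announcing it.
        routes = [
            (ipaddress.ip_network("203.0.113.0/24"), "legitimate origin"),
            (ipaddress.ip_network("203.0.113.0/25"), "offending peer (more specific leak)"),
        ]

        def lookup(dest):
            """Longest-prefix match: the most specific covering route wins."""
            matches = [(net, who) for net, who in routes if dest in net]
            return max(matches, key=lambda m: m[0].prefixlen)

        net, who = lookup(ipaddress.ip_address("203.0.113.10"))
        print(f"Traffic forwarded towards: {who} via {net}")
        # The leaked /25 wins, which is why filtering it out or preferring
        # other paths is needed as a mitigation.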

    Update (11:34): LINX have informed us that they have temporarily disabled the connection to the LINX route servers for the offending network operator. We will keep our own temporary mitigations in place until such time as we have received confirmation that this issue has been permanently resolved.

    Update (13:47): LINX have confirmed that the offending network operator has resolved the problem and LINX have re-enabled their connections to the LINX route servers. We have removed our temporary mitigation and confirmed that Virgin Media IP addresses are still reachable.
    Please accept our apologies for the inconvenience and don't hesitate to get in touch with our helpdesk in the usual manner if you are still experiencing any connectivity issues.

  • Date - 10/11/2016 09:50 - 10/11/2016 13:47
  • Last Updated - 14/11/2016 20:34
Unexpected reboot of LDeX1-VPS3 (Resolved)
  • Priority - High
  • Affecting System - LDeX1-VPS3
  • Whilst attempting to investigate an issue with our management access to the hypervisor on the LDeX1-VPS3 node, it has unexpectedly rebooted. All virtual servers hosted on LDeX1-VPS3 will be offline at the moment.

    Update: The LDeX1-VPS3 hypervisor has finished booting and is back online. Individual virtual servers are currently starting up and should be back online shortly.

    Update: All of the virtual servers hosted on the LDeX1-VPS3 node have successfully finished booting and normal service has been resumed. Please accept our apologies for the inconvenience caused and feel free to contact our support staff via the usual means if you are still experiencing any problems.

  • Date - 06/11/2016 11:24 - 06/11/2016 11:32
  • Last Updated - 06/11/2016 11:44
Backhaul network maintenance (Resolved)
  • Priority - Medium
  • Affecting System - AS41000 network
  • Our backhaul network supplier has informed us that our connection between LDeX2 and THN will be impacted by fibre splicing work being carried out between Equinix MA1 (formerly Telecity Williams House and Kilburn House) and Equinix MA2 (formerly Telecity Reynolds House) between 18:00 on 19/09/2016 and 07:30 on 20/09/2016. Our network is designed to withstand the loss of one circuit and continue to function as normal, so this maintenance work should not be service affecting.

    Whilst we do not anticipate a complete loss of connectivity to any part of our network during this maintenance window, there may be a brief period of increased latency or packet loss at the start and end of the maintenance work whilst the network reconverges. As with all network maintenance of this nature, the network should be considered at risk of disruption for the duration of the maintenance window.

    If you have any questions or concerns about this maintenance work, please don't hesitate to get in touch with our helpdesk in the usual manner.

    Update: This work has been cancelled and will be rescheduled in 2017.

  • Date - 19/09/2016 18:00 - 20/09/2016 07:30
  • Last Updated - 29/09/2016 13:53
Backhaul network fibre cut (Resolved)
  • Priority - Medium
  • Affecting System - AS41000 network
  • Our backhaul network supplier has informed us that they are currently experiencing an outage on one of their 10G wavelengths between London and Manchester.

    Our network is designed to withstand the loss of one circuit and continue to function as normal, so this backhaul issue is not service affecting, however it does mean that the network should be considered "at risk" until the connection is back online and full redundancy is restored.

    If you have any questions or concerns about the impact of this backhaul issue, please don't hesitate to get in touch with our helpdesk in the usual manner.

    Update: Our backhaul provider has informed us that the underlying dark fibre supplier (Zayo) have discovered a fibre break between Stoke and Wolverhampton and a repair team has been dispatched.

    Update: The engineers have arrived at Zayo's Hanchurch PoP and are setting up to test the southern fibre leg heading towards Shareshill, where engineers are scheduled to arrive at 14:00.

    Update: The tests have shown that there is a break 20.9km south of Hanchurch. Engineers are en route to survey the area on foot, however this is rural land and the fibre chambers are each 2.2km apart, so it may take some time to locate the damage.

    Update: The engineers have located the fibre break and are currently making preparations for a temporary repair. As the break is 600m from the nearest road, all equipment is being carried to the break and so this is slowing down the repair efforts.

    Update: The fibre break has been repaired and the affected 10G wavelength circuit is back up, so our network is now fully redundant again. Our backhaul network supplier are awaiting an RFO from Zayo to confirm what caused the damage as well as whether this is a temporary or permanent repair.

  • Date - 28/09/2016 10:52 - 29/09/2016 02:37
  • Last Updated - 29/09/2016 12:49
Routine reboot of LDeX1-Plesk1 (Resolved)
  • Priority - Low
  • Affecting Server - LDeX1-Plesk1
  • We will be performing a routine reboot of LDeX1-Plesk1 at 21:00 on 14th September 2016. This reboot will result in less than 10 minutes of downtime and is to resolve issues with backup software that can only be resolved by a complete system reboot.

    Update (21:16): The reboot of LDeX1-Plesk1 has completed successfully.

  • Date - 14/09/2016 21:00 - 14/09/2016 21:10
  • Last Updated - 14/09/2016 21:16
Routine facility UPS maintenance (Resolved)
  • Priority - Medium
  • Affecting System - LDeX2 UPS
  • We have been notified by London Data eXchange that they will be carrying out work on the UPS in the LDeX2 facility between 09:00 and 17:00 on 25/08/2016 as part of their planned preventative maintenance programme.

    This work will involve a firmware update, visual inspection of the components in each of the UPS systems, functional testing and cleaning of the fans. As such the LDeX2 facility may be operating at a level of reduced redundancy whilst this work is being carried out, so all services hosted in LDeX2 should be considered "at-risk" for the duration of this work.

    This maintenance work will be carried out on both the A-side and B-side UPS systems separately. At no point will both systems be under maintenance simultaneously. During this maintenance period, it may be necessary for either of the UPS units to be placed into bypass mode. This means that any equipment connected to the feed supplied by that UPS unit will be running on raw mains power and as such should be considered at-risk in case there is an outage on the utility mains feed. Generator backup power will remain available throughout the maintenance work if required.

    All devices with dual power supplies should be connected to both the A-side and B-side PDUs, so in the event of any problems on one of the feeds the other feed will still be available. All devices with single power supplies should be fed from our in-rack ATS units, which can switch between the two feeds fast enough that connected devices do not see any loss of power.

    If you have any questions or if you wish to double check how your devices are connected, please don't hesitate to get in touch with our helpdesk in the usual manner.

    Update: This maintenance has been successfully completed without any impact to service.

  • Date - 25/08/2016 09:00 - 25/08/2016 17:00
  • Last Updated - 25/08/2016 17:36
Unexpected reboot of LDeX1-VPS2 (Resolved)
  • Priority - High
  • Affecting System - LDeX1-VPS2
  • The LDeX1-VPS2 hypervisor has unexpectedly rebooted. The hypervisor is already back up and running and all VMs are booting.

    Update: All but one VM is back online. The VM in question is currently carrying out a filesystem check due to the length of time since the last one.
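
    For context, ext-family filesystems can force a full check at boot once a mount count or time interval recorded in the superblock is exceeded, which matches the behaviour described above. A minimal sketch of inspecting those thresholds, assuming an ext2/3/4 filesystem, root privileges, and a hypothetical device path:

        import subprocess

        # tune2fs is part of e2fsprogs; /dev/vda1 is a placeholder device.
        out = subprocess.run(
            ["tune2fs", "-l", "/dev/vda1"],
            capture_output=True, text=True, check=True,
        ).stdout

        # These superblock fields control when a boot-time check is forced.
        for line in out.splitlines():
            if line.startswith(("Mount count", "Maximum mount count",
                                "Last checked", "Check interval")):
                print(line)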

    Update: The filesystem check on the VM has finished. All virtual servers have finished starting and normal service has been resumed. We will continue to monitor the server in case of any further issues as well as investigating what caused the unexpected reboot. Please accept our apologies for the inconvenience caused and feel free to contact our support staff via the usual means if you are still experiencing any problems.

  • Date - 07/08/2016 12:01 - 07/08/2016 12:03
  • Last Updated - 07/08/2016 12:27
Backhaul network maintenance (Resolved)
  • Priority - Medium
  • Affecting System - AS41000 network
  • Our backhaul network supplier has informed us that our connection between LDeX1 and LDeX2 will be impacted by highway maintenance work being carried out on the M1 motorway between 06:45 and 19:15 on 31/07/2016. Our network is designed to withstand the loss of one circuit and continue to function as normal, so this maintenance work should not be service affecting.

    Whilst we do not anticipate a complete loss of connectivity to any part of our network during this maintenance window, there may be a brief period of increased latency or packet loss at the start and end of the maintenance work whilst the network reconverges. As with all network maintenance of this nature, the network should be considered at risk of disruption for the duration of the maintenance window.

    If you have any questions or concerns about this maintenance work, please don't hesitate to get in touch with our helpdesk in the usual manner.

    Update: Our backhaul connectivity supplier has confirmed that this maintenance work has been completed successfully without any impact on service. Unfortunately the restoration of the service was delayed until 21:00 on 31/07/2016 due to a faulty network card in the WDM platform causing a large number of alarms which took several hours to investigate. Restarting the card in question cleared the alarms and restored the affected wavelength.

  • Date - 31/07/2016 06:45 - 31/07/2016 19:15
  • Last Updated - 04/08/2016 23:08
Backup server maintenance (Resolved)
  • Priority - Low
  • Affecting System - Backups
  • We will be performing scheduled maintenance of our backup system on 15/07/2016 starting at 11:00. This will primarily consist of updating the R1Soft CDP software to the latest version, during which time no new backups will be taken and no restores can take place.

    This update requires the on-disk format of the backup disk safes to be updated, and due to the volume of data contained within these disk safes the update may take some time to complete.

    This is essential maintenance to ensure the ongoing stability and security of this server. Please accept our apologies for any inconvenience that this will cause and feel free to contact our support staff via the usual means if you have any questions or concerns.

    Update: The maintenance work has begun and we have blocked any new tasks from being scheduled. We are just waiting for existing tasks to complete before proceeding with the upgrade.

    Update: All scheduled tasks have completed successfully, so we will now begin the upgrade.

    Update: Just over a third of the disk safes have been converted to the new format so far.

    Update: Two thirds of the disk safes have now been converted to the new format.

    Update: We are still waiting for the conversion of the last few disk safes to complete. These disk safes are all quite large and so are taking some time to convert.

    Update: We are now down to converting the last four disk safes.

    Update: We are still waiting for the last four disk safes to finish converting. We can see heavy I/O on the disk array as well as high CPU usage on the server, so we know that the disk safe conversion is still in progress, but unfortunately we don't have any indication of progress or time remaining.
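
    As an aside, the kind of spot check described above can be done by sampling the kernel's block-device counters. A minimal sketch, with the device name a placeholder for whichever array holds the disk safes:

        import time

        DEV = "sda"  # placeholder device name

        def sectors_written(dev):
            # In /proc/diskstats the tenth field (index 9 after splitting)
            # is sectors written, always counted in 512-byte units.
            with open("/proc/diskstats") as f:
                for line in f:
                    parts = line.split()
                    if parts[2] == dev:
                        return int(parts[9])
            raise ValueError(f"device {dev!r} not found")

        before = sectors_written(DEV)
        time.sleep(10)
        after = sectors_written(DEV)
        print(f"~{(after - before) * 512 / 10 / 1e6:.1f} MB/s written")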

    Update: Due to the length of time that the conversion of the last four disk safes seems to be taking compared to all of the other disk safes on the backup server, we have decided to restart the backup server service in case we have run into a bug which has caused the conversion process to become stuck.

    Update: Unfortunately, due to the length of time that the conversion of the last four disk safes is taking, we have been forced to abandon the conversion process and create new disk safes. This means that we will be unable to restore any data contained within these four disk safes. This was a difficult decision, but we feel that it is more important that these servers are backing up again so that fresh data is protected.

    Update: The fresh backups of the four servers whose disk safes we've had to recreate have finished. We are checking that all other backups are running correctly.

    Update: All but one server is backing up correctly. The issue with that server is unrelated to the backup server upgrade (kernel module update required).

    Update: All servers are now backing up correctly.

  • Date - 15/07/2016 11:00 - 30/07/2016 02:26
  • Last Updated - 30/07/2016 14:00
Leased line outage (Resolved)
  • Priority - Critical
  • Affecting System - Leased line customers
  • Our interconnect with one of our leased line wholesale providers in Telehouse North has gone physically down. Any leased lines using this provider will currently be unavailable. We are investigating the cause of this.

    Update: The wholesale provider have confirmed that they are investigating a major incident.

    Update: The LINX and LONAP looking glasses both show a number of BGP sessions dropping at approximately the same time, so there may be a wider issue.

    Update: The wholesale provider have confirmed that there is a power outage affecting their equipment in the TFM10 suite in Telehouse North. They are working with the facility operator to restore power and have also dispatched their own engineers.

    Update: We are seeing reports from several customers of users struggling to access services that we host. All of our equipment is online and working normally, however several ISPs including BT are affected by the power issues in Telehouse North and so access to some destinations is disrupted. If you are having issues accessing our services then please get in touch with our helpdesk in the usual manner, however it is likely that you will need to speak to your own ISP.
    This is unrelated to the power issues in Equinix LD8 (formerly Telecity Harbour Exchange) which affected BT yesterday.

    Update: We have forced BT/Plusnet traffic via a different upstream provider as we suspect that there is significant congestion at the BT end. This change was made at approximately 10:35 and initial reports seem to indicate that this has alleviated the problem for BT/Plusnet customers.
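
    For context, steering traffic towards a particular upstream in this way is typically done by raising BGP local-preference on routes learned from that provider, since local-preference is compared before AS-path length in best-path selection. A minimal sketch of that comparison, with all attribute values invented for illustration:

        # Simplified BGP best path: higher local-pref wins first, then the
        # shorter AS path. The values below are invented for illustration.
        routes_to_bt = [
            {"upstream": "congested transit", "local_pref": 100, "as_path_len": 2},
            {"upstream": "alternate transit", "local_pref": 200, "as_path_len": 3},
        ]

        best = max(routes_to_bt, key=lambda r: (r["local_pref"], -r["as_path_len"]))
        print("Egress via:", best["upstream"])
        # Raising local-pref on the alternate provider's routes overrides the
        # otherwise shorter AS path via the congested one.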

    Update: Our interconnect with the leased line wholesale provider has come back up at 14:57 and all affected circuits are back up and running again. We have not yet received an official all-clear, so services should still be considered at-risk.

    Update: All leased line services delivered via the interconnect in question have remained online and stable since power was restored by Telehouse on Thursday afternoon, however we are still waiting for full details of exactly what happened and whether it has been fully resolved.

    Update: We have received a copy of the Telehouse RFO and it seems that the south side of the 3rd floor in Telehouse North lost power at 07:43 on 21/07/2016 due to a fault in the electrical distribution infrastructure.
    The faulty components were isolated and power was restored to some of the affected Telehouse customers at approximately 10:20. Terminal blocks, circuit breakers and associated cabling were then replaced and power was restored to the remainder of the affected portion of the 3rd floor by 13:40.
    The wholesale leased line provider who was affected by this outage then began powering their equipment back up and found that they had to replace 3x IOM cards in their Nokia (formerly Alcatel Lucent) network equipment which had failed. This was completed and our interconnect service was restored at 14:57.
    Telehouse are conducting an RCA (Root Cause Analysis) of the fault and we expect to receive a copy of their findings in due course, however it may take some time for them to produce as they will likely have to have the failed components forensically analysed. If you wish to receive a copy of this RCA once it is available, please get in touch with our helpdesk in the usual manner.
    In the meantime, power in Telehouse North has remained stable and we do not have any reason to believe that there is a risk of this problem reoccurring. Please accept our apologies for the inconvenience caused by this. We will be getting in touch with individual leased line customers in order to arrange the appropriate SLA credits.

  • Date - 21/07/2016 07:43 - 21/07/2016 14:57
  • Last Updated - 28/07/2016 22:33
Scheduled server reboot (Resolved)
  • Priority - Medium
  • Affecting Server - TMA01/Japetus
  • We will be performing scheduled maintenance including essential software updates on TMA01/Japetus between 21:00 and 23:59 on 14/07/2016. This will require us to reboot the server so there will be a loss of service whilst this is done. The maintenance work itself should take less than 30 minutes to complete and we expect downtime to be much less than this.

    This is essential maintenance to ensure the ongoing stability and security of this server. Please accept our apologies for any inconvenience that this will cause and feel free to contact our support staff via the usual means if you have any questions or concerns.

    Update: The maintenance work has been completed and the server is back online. Total downtime was approximately 3 minutes across two reboots. If you are still experiencing any problems, please don't hesitate to get in touch with our helpdesk.

  • Date - 14/07/2016 21:00 - 14/07/2016 23:59
  • Last Updated - 14/07/2016 23:04
Scheduled server reboot (Resolved)
  • Priority - Medium
  • Affecting Server - TMA01/Japetus
  • We will be performing scheduled maintenance including essential software updates on TMA01/Japetus between 21:00 and 23:59 on 28/05/2015. This will require us to reboot the server so there will be a loss of service whilst this is done. The maintenance work itself should take less than 30 minutes to complete and we expect downtime to be much less than this.

    This is essential maintenance to ensure the ongoing stability and security of this server. Please accept our apologies for any inconvenience that this will cause and feel free to contact our support staff via the usual means if you have any questions or concerns.

    Update: The maintenance work has been completed and the server is back online. Total downtime was approximately 22 minutes across one reboot. If you are still experiencing any problems, please don't hesitate to get in touch with our helpdesk.

  • Date - 27/07/2015 21:00 - 27/07/2015 23:59
  • Last Updated - 14/07/2016 21:43
IXManchester peering (Resolved)
  • Priority - Medium
  • Affecting System - AS41000 network
  • Our connection to the IXManchester exchange has unexpectedly dropped. Any traffic entering or leaving the AS41000 network via IXManchester will have briefly been interrupted, however all traffic was automatically re-routed to alternative connections so the disruption should have been minimal.

    We are investigating what caused this issue, however the AS41000 network remains fully redundant via alternative paths in the meantime.

    Update: The network interface connecting our router to the IXManchester exchange is physically flapping. We have temporarily disabled all BGP sessions to peers over the IXManchester exchange whilst we investigate what is causing this.

    Update: The cabling has been checked and replaced and the interface is now up. We are monitoring to ensure that it is stable before re-enabling the BGP sessions with our peers over the IXManchester exchange.

    Update: The connection has been up and stable for 3 hours, so we have re-enabled the BGP sessions with our peers over the IXManchester exchange and traffic is flowing as normal again.
    We will continue to monitor our connection to the IXManchester exchange in order to ensure that it remains stable, however we believe that the issue has been resolved at this time. Please don't hesitate to contact our helpdesk in the usual manner if you are experiencing any problems and please accept our apologies for the inconvenience.

  • Date - 07/07/2016 16:58 - 07/07/2016 20:32
  • Last Updated - 07/07/2016 21:49
Routine facility breaker maintenance (Resolved)
  • Priority - Low
  • Affecting System - LDeX1 LV Panel Breakers
  • As part of our ongoing PPM programme, on Monday 20th June 2016 between 08:30 and 17:00 Emerson Network Power will be carrying out a routine maintenance operation on the LV panel main incomer breaker and the generator breaker and associated switchgear.

    This operation will mean that there will be a reduced level of resilience at times during the maintenance window, however we don’t anticipate any disruption to your service.

    This maintenance procedure includes a visual inspection of the switchgear, service and test of the ACCB and generator breaker.

  • Date - 20/06/2016 08:30 - 20/06/2016 17:00
  • Last Updated - 02/07/2016 23:52
Backhaul network maintenance (Resolved)
  • Priority - Low
  • Affecting System - AS41000 network
  • Our backhaul supplier has informed us that they are carrying out essential maintenance on network equipment that may impact our backhaul connection between LDeX2 (Manchester) and Telehouse North (London, Docklands). This maintenance work is to install a new module and replace a faulty module in core network switches and may cause a brief interruption of network connectivity to links using the affected network equipment.

    Freethought has redundant links between Manchester and London so the work on one of our links should not be service affecting, however the network should be considered at risk whilst this work is carried out.

    Update: Our backhaul connectivity supplier has confirmed that this maintenance work has been completed successfully without any impact on service.

  • Date - 06/06/2016 22:00 - 07/06/2016 02:00
  • Last Updated - 07/06/2016 09:04
Scheduled server reboot (Resolved)
  • Priority - Medium
  • Affecting Server - TMA01/Japetus
  • We will be performing scheduled maintenance including essential software updates on TMA01/Japetus between 21:00 and 23:59 on 28/05/2016. This will require us to reboot the server so there will be a loss of service whilst this is done. The maintenance work itself should take less than 30 minutes to complete and we expect downtime to be much less than this.

    This is essential maintenance to ensure the ongoing stability and security of this server. Please accept our apologies for any inconvenience that this will cause and feel free to contact our support staff via the usual means if you have any questions or concerns.

  • Date - 28/05/2016 21:00 - 28/05/2016 23:59
  • Last Updated - 02/06/2016 14:54
Network maintenance (Manchester) (Resolved)
  • Priority - Low
  • Affecting Other - AS41000 Network
  • Our backhaul supplier will be performing essential maintenance work to their core network equipment during the early hours of the 27th May. This maintenance involves rebooting four network devices in turn with a 30 minute gap between each reboot. Our backhaul links between Manchester and London take two different paths via different switches, so only one of our backhaul links should be impacted at any one time.

    Our network is designed to withstand the loss of one circuit so this maintenance should not be service affecting. During this window we do not anticipate a complete loss of connectivity to any part of our network, however there may be short moments of increased latency or packet loss. As with all network maintenance of this nature the network should be considered at risk of disruption.

  • Date - 27/05/2016 00:01 - 27/05/2016 04:00
  • Last Updated - 02/06/2016 14:54
Network Maintenance (Telehouse North) (Resolved)
  • Priority - Low
  • Affecting Other - AS41000 Network
  • Our backhaul supplier will be performing essential maintenance work to their core network equipment located in Telehouse North (London Docklands) during the early hours of the 26th May. This maintenance involves rebooting two network devices in turn with a 30 minute gap between each reboot. Each of these switches terminates one of our links to either Manchester or our LDeX1 PoP, so as each switch is rebooted we will lose only one of the links at any one time.

    Our network is designed to withstand the loss of one circuit so this maintenance should not be service affecting. During this window we do not anticipate a complete loss of connectivity to any part of our network, however there may be short moments of increased latency or packet loss. As with all network maintenance of this nature the network should be considered at risk of disruption.

    Update: This maintenance work has been completed successfully.

  • Date - 26/05/2016 00:01 - 26/05/2016 04:00
  • Last Updated - 26/05/2016 01:32
DDoS attack against AS41000 (Resolved)
  • Priority - High
  • Affecting Other - AS41000 Network
  • We are currently experiencing a DDoS attack against our network which is causing some customers to have difficulty accessing services we provide. We are working to identify and mitigate the situation as quickly as possible and will update this status message as we learn more.

    Update (18:54): The attack has been mitigated and the network is now recovering; services hosted on the network should now be accessible. We will continue to monitor the network for the next few hours to ensure continued availability and to mitigate any additional attacks.

    Update (20/05/2016 @ 09:22): The network has remained stable overnight. We have identified the target of the attack and are working with them to prevent any recurrence of this problem. Once again we apologise for the inconvenience.

  • Date - 19/05/2016 17:50 - 19/05/2016 18:54
  • Last Updated - 20/05/2016 09:23
MariaDB (MySQL) upgrade (Resolved)
  • Priority - Medium
  • Affecting Server - LDeX1-Plesk3
  • We will be upgrading the version of MariaDB on the LDeX1-Plesk3 server from 5.5 to 10.0 on 12/05/2016 between 21:00 and 23:59. We expect the upgrade to take around 5-10 minutes to complete.

    We will need to stop the MariaDB service for the duration of the upgrade, so this will affect any scripts depending on the MariaDB/MySQL service. The Plesk control panel will also be affected.
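
    After an upgrade like this, a quick smoke test is to confirm that the server accepts connections and reports the expected major version. Below is a minimal sketch, assuming the PyMySQL client library and placeholder credentials, neither of which is taken from this notice:

        import pymysql  # assumed client library; any MySQL-compatible driver works

        conn = pymysql.connect(host="localhost", user="check", password="secret")
        try:
            with conn.cursor() as cur:
                cur.execute("SELECT VERSION()")
                (version,) = cur.fetchone()
            print("Server version:", version)  # e.g. "10.0.x-MariaDB"
            assert version.startswith("10.0"), "MariaDB 10.0 upgrade not visible"
        finally:
            conn.close()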

    Please accept our apologies for any inconvenience that this will cause and feel free to contact our support staff via the usual means if you have any questions or concerns.

    Update: The update has been completed and MySQL/MariaDB is back online again.

  • Date - 12/05/2016 21:00 - 12/05/2016 23:59
  • Last Updated - 12/05/2016 21:24
Essential network maintenance (Resolved)
  • Priority - Medium
  • Affecting Other - Network
  • In conjunction with our suppliers we are performing essential network maintenance on 11th May starting at 10pm and continuing for up to 4 hours. This maintenance is to improve the reliability and stability of our national links connecting London to Manchester as well as performing other essential minor network related tasks and config changes to our core network.

    During the maintenance window our network will at times be operating with reduced redundancy both in terms of our network links between Manchester and London, and our links to transit and peering. Whilst a complete loss of network connectivity is unlikely the entire Freethought network should be considered at risk for the duration of the maintenance window.

    Update 1: This maintenance has proven more disruptive than planned with periods of complete loss of connectivity. We apologise for the inconvenience whilst we complete this essential maintenance work.

    Update 2: Our network maintenance changes have been completed and the network appears to be stable again. We are continuing to monitor the situation and complete minor remaining tasks.

    Update 3: The maintenance is now complete. Once again, we apologise for any inconvenience caused by the earlier loss of connectivity to our network. This work should mean greatly improved reliability and stability for our network going forward, particularly in our Manchester PoP. This issue is now considered resolved; any further problems should be reported via the usual support channels.

  • Date - 11/05/2016 22:00 - 12/05/2016 00:54
  • Last Updated - 12/05/2016 00:54
Unexpected reboot of LDeX1-VPS3 (Resolved)
  • Priority - High
  • Affecting System - LDeX1-VPS3
  • Whilst investigating an issue with our management access to the hypervisor on the LDeX1-VPS3 node, it has unexpectedly rebooted. All virtual servers hosted on LDeX1-VPS3 will be offline at the moment.

    Update (10:53): The LDeX1-VPS3 hypervisor is back online and individual virtual servers are currently starting.

    Update (10:55): All virtual servers have finished starting and normal service has been resumed. We will continue to monitor the server in case of any further issues as well as investigating what caused the unexpected reboot. Please accept our apologies for the inconvenience caused and feel free to contact our support staff via the usual means if you are still experiencing any problems.

  • Date - 08/05/2016 10:47 - 08/05/2016 10:55
  • Last Updated - 08/05/2016 10:59
MariaDB (MySQL) upgrade (Resolved)
  • Priority - Medium
  • Affecting Server - LDeX1-Plesk1
  • We will be upgrading the version of MariaDB on the LDeX1-Plesk1 server from 5.5 to 10.0 on 05/05/2016 between 21:00 and 23:59. We expect the upgrade to take around 5-10 minutes to complete.

    We will need to stop the MariaDB service for the duration of the upgrade, so this will affect any scripts depending on the MariaDB/MySQL service. The Plesk control panel will also be affected.

    Please accept our apologies for any inconvenience that this will cause and feel free to contact our support staff via the usual means if you have any questions or concerns.

    Update: The update has been completed and MySQL/MariaDB is back online again.

  • Date - 05/05/2016 21:00 - 05/05/2016 23:59
  • Last Updated - 05/05/2016 21:12
Network connectivity issues. (Resolved)
  • Priority - Critical
  • Affecting Other - Network
  • We have been alerted to a network accessibility problem with our AS41000 network. We are currently investigating and will update this status as soon as more information is available.

    Update (16:14): Some of our backhaul links have gone offline resulting in a loss of connectivity. The links appear to have been restored and our network is now recovering.

    Update (16:25): Access should be restored to the majority of services hosted in our London data centre LDeX1. We are still waiting on the network to recover in Manchester and London Docklands.

    Update (16:31): Normal service has now been restored to the entire network and to all customers in London and Manchester. We will leave this network status issue open for a short while longer whilst we monitor the stability of the network.

    Update (17:01): Our network remains stable following the earlier outage. The root cause of this problem was one of our backhaul links to London Docklands going offline due to a DDoS attack against our supplier's network. This was exacerbated by a subsequent crash of some of the software processes on one of our routers, causing a wider network stability problem. Normal service has been restored and as the network remains stable this issue has been closed.

  • Date - 05/05/2016 16:10 - 05/05/2016 16:31
  • Last Updated - 05/05/2016 17:03
Reboot to apply registry changes (Resolved)
  • Priority - High
  • Affecting Server - TMA01/Japetus
  • We have identified a problem affecting MySQL database connections from the Plesk control panel which requires two registry changes to fix. In order for these changes to take effect, the TMA01/Japetus server needs to be rebooted.

    This has been scheduled to take place this evening (28/04/2016) between 21:00 and 23:59 and should hopefully take around 10 minutes to complete. We will also apply any outstanding Windows patches before this reboot so as to avoid requiring another reboot in the near future.

    Please accept our apologies for any inconvenience that this will cause and feel free to contact our support staff via the usual means if you have any questions or concerns.

    Update: This reboot has been completed successfully.

  • Date - 28/04/2016 21:00 - 28/04/2016 23:59
  • Last Updated - 28/04/2016 23:13
Emergency reboot of LDeX1-VPS3 (Resolved)
  • Priority - High
  • Affecting System - LDeX1-VPS3
  • The hypervisor on the LDeX1-VPS3 node seems to have suffered similar problems to the LDeX1-VPS4 node following a routine software update and so we are also rebooting this node.

    Again, this is being performed via a controlled shut down of the host node and all running virtual machines.

    Update: The reboot has been completed successfully and all virtual machines are back online. The original problem with the hypervisor seems to have cleared, however we will continue to monitor the server for any signs of further issues. Please accept our apologies for the inconvenience caused.

  • Date - 18/04/2016 08:06 - 18/04/2016 08:14
  • Last Updated - 18/04/2016 08:18
Emergency reboot of LDeX1-VPS4 (Resolved)
  • Priority - High
  • Affecting System - LDeX1-VPS4
  • We are currently carrying out an emergency reboot of the LDeX1-VPS4 node due to problems detected with the hypervisor which are affecting some virtual machines.

    This is being performed via a controlled shut down of the host node and all running virtual machines.

    Update: The reboot has been completed successfully and all virtual machines are back online. The original problem with the hypervisor seems to have cleared, however we will continue to monitor the server for any signs of further issues. Please accept our apologies for the inconvenience caused.

  • Date - 17/04/2016 22:30 - 17/04/2016 22:40
  • Last Updated - 17/04/2016 22:42
Maintenance to upgrade TMA01/Japetus (Resolved)
  • Priority - Low
  • Affecting Server - TMA01/Japetus
  • We are currently performing essential maintenance on TMA01/Japetus to migrate the operating system to a new virtualised server to facilitate an eventual migration of hosted websites into the Freethought cloud platform.

    Update (20-03-16 0100): The conversion of the server to a new virtualised environment has been successful and websites are back online. We will now begin the update from Plesk 9.5.4 to Plesk 11.5.

    Update (20-03-16 0350): The update to Plesk has partially completed and some websites are back online. However, the update has yet to fully complete and a large file system permissions task is still running. This task is likely to take some time.

    Update (20-03-16 0900): The file system permissions task is still running. We have identified that some websites are failing to load due to a "500 Internal Server Error" - this is caused by the PHP module not being present because the Plesk update process has yet to install it. We are unable to investigate or resolve this issue until the Plesk update completes as it will most likely resolve the issue as part of the update.

    Update (20-03-16 2100): The file system permissions task is still running and we have no estimated time to completion. Unfortunately some websites are still offline due to the PHP library errors mentioned in the previous update. We are unable to intervene manually as to do so could cause data loss; at this point we have no choice but to allow the update to complete.

    Update (21-03-16 0825): The file system permissions task has finally completed and the additional tasks that form part of the update are now completing. We have managed to manually restore a small number of the websites experiencing issues, however those experiencing PHP-related errors are still offline with no estimated time to resolution.

    Update (21-03-16 0904): The update process has installed the PHP engine, which appears to have resolved the issues with PHP that were preventing some websites from loading. We are currently verifying whether any websites are still experiencing problems.

    Update (21-03-16 0912): The installation of the PHP engine appears to have resolved the issues with all of the websites that were previously not loading. The update is still running but it no longer appears to be service affecting.

    Update (21-03-16 0941): The Plesk update has completed! If you are having any continuing issues with websites or email please contact support by emailing support@freethought-internet.co.uk or calling 03300 882130.

  • Date - 19/03/2016 21:00 - 21/03/2016 09:41
  • Last Updated - 21/03/2016 09:42
LDeX1-VPS3 rebooted (Resolved)
  • Priority - Critical
  • We have become aware of an unexpected reboot of the VPS3 node. We are currently investigating and will update this status when we know more.

    Update (12:05): The node has now completed booting and individual virtual servers are booting back up.

    Update (12:09): All virtual servers have completed booting. We will investigate the root cause of the unexpected reboot and take steps to ensure it doesn't happen again. If you continue to experience issues with any virtual servers hosted on this node please contact support.

  • Date - 07/03/2016 12:00
  • Last Updated - 07/03/2016 12:10
Upstream network maintenance (Resolved)
  • Priority - Medium
  • Affecting System - AS41000 in LDeX1
  • We have been notified by our backhaul connectivity provider that they will be carrying out scheduled maintenance work on the devices which terminate our connections in LDeX1 between 00:01 and 04:00 on 27/02/2016.

    During this period they will need to reboot each device in turn, which will result in a loss of service for approximately 5-10 minutes. These reboots will be at least 1 hour apart in order to minimise disruption and ensure network stability.

    Due to the redundant nature of our network, we do not anticipate there to be any impact on customer services as a result of this maintenance, however during this time, connectivity should be considered "at risk" due to the reduced level of redundancy available.

    If you have any questions, then please do not hesitate to get in touch with our helpdesk.

  • Date - 27/02/2016 00:01 - 27/02/2016 00:38
  • Last Updated - 11/02/2016 18:23
SMTP service stopped (Resolved)
  • Priority - High
  • Affecting Server - TMA01/Japetus
  • The login credentials for a client's email account hosted on the TMA01/Japetus server have been compromised and used to send a large volume of spam, which has resulted in a very large mail queue on this server. We have therefore temporarily stopped the SMTP service on TMA01/Japetus whilst we clean out the queue. All inbound and outbound SMTP connections will currently be failing as a result of this.

    Update: The mail server's outgoing queue has been cleaned and the SMTP service has been started back up again. Please accept our apologies for the inconvenience and don't hesitate to get in touch if you are still experiencing any issues.

  • Date - 08/02/2016 17:50 - 08/02/2016 20:21
  • Last Updated - 08/02/2016 20:21
Routine facility UPS maintenance (Resolved)
  • Priority - Medium
  • Affecting System - LDeX1 UPS
  • We have been notified by London Data eXchange that they will be carrying out work on the UPS in the LDeX1 facility between 09:00 and 17:00 on 08/02/2016 as part of their planned preventative maintenance programme.

    This work will involve a firmware update, visual inspection of the components in each of the UPS systems, functional testing and cleaning of the fans. As such the LDeX1 facility may be operating at a level of reduced redundancy whilst this work is being carried out, so all services hosted in LDeX1 should be considered "at-risk" for the duration of this work.

    This maintenance work will be carried out on both the A-side and B-side UPS systems separately. At no point will both systems be under maintenance simultaneously. During this maintenance period, it may be necessary for either of the UPS units to be placed into bypass mode. This means that any equipment connected to the feed supplied by that UPS unit will be running on raw mains power and as such should be considered at-risk in case there is an outage on the utility mains feed. Generator backup power will remain available throughout the maintenance work if required.

    All devices with dual power supplies should be connected to both the A-side and B-side PDUs, so in the event of any problems on one of the feeds the other feed will still be available. All devices with single power supplies should be fed from our in-rack ATS units, which can switch between the two feeds fast enough that connected devices do not see any loss of power.

    If you have any questions or if you wish to double check how your devices are connected, please don't hesitate to get in touch with our helpdesk in the usual manner.

    Update: This maintenance has been successfully completed without any impact to service.

  • Date - 08/02/2016 09:00 - 08/02/2016 17:00
  • Last Updated - 08/02/2016 16:18
Network disruption (Resolved)
  • Priority - High
  • Affecting System - AS41000
  • We are currently experiencing difficulty with one of our routers located in Telehouse North (THN). Whilst our network adapted to the loss of this router there will have been some instability and loss of service starting at 09:01.

    Update: The router in THN has started responding again and appears to have gone offline due to a software failure.

    Update: Some customers will have seen service return around 09:11 and full service was restored by 09:26. Please accept our sincere apologies for the inconvenience caused by this network issue and please don't hesitate to let us know if you are still experiencing any problems or if you have any questions.

  • Date - 05/01/2016 09:01 - 05/01/2016 09:26
  • Last Updated - 05/01/2016 11:20
Network disruption (Resolved)
  • Priority - High
  • Affecting System - LDeX2
  • We have observed brief periods of disruption on our backhaul connectivity to LDeX2. We have asked our supplier to investigate this. Customers in LDeX2 may have noticed some packet loss whilst traffic was re-routed.

    Update: This seems to have stopped around 06:44, however we are continuing to closely monitor the network in case of any further problems and await an explanation from our supplier.

  • Date - 26/12/2015 06:22 - 26/12/2015 06:44
  • Last Updated - 05/01/2016 09:17
Routine facility maintenance work (Resolved)
  • Priority - Medium
  • Affecting System - LDeX1 data centre utility mains and generator LV breakers
  • We have been notified by London Data eXchange that they will be carrying out work on the LV breakers for both the main incoming utility mains feed and for the backup diesel generator feed in the LDeX1 facility between 08:30 and 17:00 on 14/12/2015 as part of their planned preventative maintenance programme.

    This work will involve visual inspection, servicing and testing of the breakers. As such the LDeX1 facility may be operating at a level of reduced redundancy whilst this work is being carried out, so all services hosted in LDeX1 should be considered "at-risk" for the duration of this work.

    Generator power may be unavailable for periods during this work, however no work is being carried out on the utility mains power infrastructure whilst the generator is unavailable, so one power source should always be available to power the facility. The three facility UPS have approximately 45 minutes of battery run time at the current load.
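
    As a rough illustration of how battery run time relates to load (run time scales approximately inversely with the power drawn), here is a minimal sketch; the energy figure is an assumed example, not the actual rating of the facility UPS.

        # Rough UPS run time estimate: run time scales ~inversely with load.
        # The energy figure is an assumed example, not the facility's rating.
        BATTERY_WH = 75_000.0  # assumed usable battery energy, in watt-hours

        def runtime_minutes(load_watts: float) -> float:
            """Approximate run time in minutes at a constant load."""
            return BATTERY_WH / load_watts * 60

        print(round(runtime_minutes(100_000)))  # 45 minutes at an assumed 100 kW load
        print(round(runtime_minutes(150_000)))  # 30 minutes if the load grows by half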

    All racks will continue to be fed from two of the facility UPS via independent A+B feeds throughout the duration of the maintenance work. All devices with dual power supplies should be connected to both the A-side and B-side PDUs, so in the event of any problems on one of the feeds, they will still have the other feed available. All devices with single power supplies should be fed from our in-rack ATS units, which can switch between the two feeds fast enough that connected devices do not see any loss of power.

    If you have any questions or if you wish to double check how your devices are connected, please don't hesitate to get in touch with our helpdesk in the usual manner.

    Update: LDeX have successfully completed this maintenance work without incident.

  • Date - 14/12/2015 08:30 - 14/12/2015 17:00
  • Last Updated - 14/12/2015 21:29
Network maintenance work (Resolved)
  • Priority - High
  • Affecting System - AS41000 Network
  • We will be undertaking a period of maintenance work from 23:00 on Saturday 12/12/2015 to 03:00 on Sunday 13/12/2015 for the final part of the planned remedial works following the recent network problems.

    We do not expect any significant impact on service availability during this maintenance work, however due to the nature of the changes the network should be considered at-risk throughout. There may be brief periods of increased latency or packet loss whilst traffic is re-routed as this work is carefully carried out in stages.

    If you have any questions or concerns about this maintenance work, please don't hesitate to get in touch with our helpdesk in the usual manner.

    Update: This maintenance work has been completed successfully.

  • Date - 12/12/2015 23:00 - 13/12/2015 03:00
  • Last Updated - 13/12/2015 01:19
Server unresponsive (Resolved)
  • Priority - High
  • Affecting Server - LDeX1-cPanel4
    The cloud node running the LDeX1-cPanel4 server is experiencing network problems. We are currently investigating whether we can resolve these problems or if we will have to restart the LDeX1-cPanel4 server on another node.

    Update: We have been unable to resolve the networking issues on the affected node, so we have powered it down and are booting the LDeX1-cPanel4 server back up on another node.

    Update: The LDeX1-cPanel4 server is back online running on a different cloud node. Please accept our apologies for the inconvenience and don't hesitate to get in touch with our helpdesk in the usual manner if you are still experiencing any problems.

  • Date - 27/11/2015 19:48 - 27/11/2015 20:51
  • Last Updated - 27/11/2015 20:58
Router instability (Resolved)
  • Priority - Critical
  • Affecting System - AS41000 Network
  • We are currently aware of some router instabilities that are causing some customers to have trouble accessing our network. We're working on resolving the issue and will update this status message as more information is available.

    Update: We experienced some network instability this evening which manifested itself as packet loss or intermittent connection problems for some users. The sporadic nature of this issue made it difficult to track down exactly what was happening and thus what was causing it, however from our initial investigations we believe that our LDeX1-RT1 router was the primary culprit.

    This router has temporarily been removed from the network, and since that was done at approximately 18:25 the problems seem to have been resolved and everything appears stable. We are continuing to closely monitor the network in case of any further issues.

    We will examine the LDeX1-RT1 router in detail and if we believe that it is safe to do so we will reintroduce it into the network later this evening.

    Update: Following the network instability earlier this evening, we have successfully completed a software upgrade on the problematic LDeX1-RT1 router to bring it in line with the software version that we are already running on our THN-RT1 and LDeX2-RT1 routers and reintroduced it into the network. We have also removed the LDeX1-RT2 router from the network, upgraded it to the same software version and reintroduced it into the network. This emergency maintenance work was completed without any service-affecting impact on customer traffic.

    Full redundancy has now been restored to the network and we will continue to monitor the network closely in order to ensure that there are no further issues. We are hopeful that this newer software version will correct the bug which we believe caused this evening's network instability and was also responsible for the issue on the 6th of November.

    Please accept our sincere apologies for the inconvenience caused by this evening's network issues and please don't hesitate to let us know if you have any questions.

  • Date - 26/11/2015 17:54 - 26/11/2015 18:25
  • Last Updated - 26/11/2015 23:05
Network instability (Resolved)
  • Priority - Critical
  • Affecting System - AS41000
    We are currently seeing significant instability across the AS41000 network.

    Update: The network is currently stable and normal service has resumed. We are investigating the cause of this issue.

    Update: The network has remained stable over the past hour. From examining log files, we believe that the issue originated on the LDeX1-RT1 router where a process crashed, although we are still investigating what caused this.

  • Date - 06/11/2015 13:35 - 06/11/2015 13:47
  • Last Updated - 06/11/2015 14:58
Server unresponsive (Resolved)
  • Priority - High
  • Affecting Server - TMA01/Japetus
    We have lost remote access to the TMA01/Japetus server.

    Update: We are unable to access the local console on the server using the KVMoIP, so we are carrying out an emergency reboot.

    Update: The server is back online and the alerts have cleared. We are investigating what caused the server to lock up, however we believe that normal service should now have resumed. Please accept our apologies for the inconvenience caused. If you are still having any problems then please contact our helpdesk in the usual manner.

  • Date - 27/10/2015 19:02 - 27/10/2015 19:23
  • Last Updated - 27/10/2015 19:26
Network disruption (Resolved)
  • Priority - High
  • Affecting System - AS41000
  • We have received a number of alerts from our external monitoring system indicating a problem with connectivity to our network. We are investigating this as a matter of urgency.

    Update: One of our upstream providers seems to be experiencing major network issues which affected any traffic that was arriving over our connections to them. We have disabled our connections to the upstream network provider in question and the AS41000 network now appears to be stable on our remaining upstream provider. We will keep these connections disabled until we receive confirmation that the cause of these issues has been identified and resolved. In the meantime, the network should be considered "at risk" due to the reduced levels of redundancy.

    Update: The upstream provider in question suffered an unexpected reboot on a router when making a routine change that should not have otherwise been service affecting. They have placed a change freeze on their network and will not implement any further non-emergency work until they have identified the underlying cause of the issue with the router manufacturer and implemented a full fix. They have advised us that the network is stable in the meantime, so we have re-enabled our connections to them and the network is now fully redundant once again.

  • Date - 24/08/2015 23:48 - 25/08/2015 23:53
  • Last Updated - 25/08/2015 01:03
DoS attack against server (Resolved)
  • Priority - High
  • Affecting Server - TMA01/Japetus
  • We experienced a Denial of Service (DoS) attack against the TMA01/Japetus server which caused a disruption to service between 19:27 and 19:38.

    We have been able to block the malicious traffic on our core network in order to restore service, which seems to be effective for now. We continue to monitor the situation in case of any changes in the attack which might cause further issues.

  • Date - 07/06/2015 19:27 - 07/06/2015 19:38
  • Last Updated - 07/06/2015 20:12
Edge switch reload (Resolved)
  • Priority - Medium
  • Affecting System - LDeX1-SW01
  • We will be performing a firmware update on the LDeX1-SW01 edge switch on 01/06/2015 between 21:00 and 23:59.

    Any devices connected to this switch will lose network connectivity when the switch is rebooted in order to load the new firmware.

    This firmware update will require the switch to be rebooted twice during the maintenance window and so there will be two periods of disruption to your service, each lasting a few minutes. 

    Whilst we do not expect to encounter any problems with this firmware update, we have a cold spare switch on standby just in case.

    Please accept our apologies for any inconvenience that this maintenance work will cause and feel free to contact our support staff via the usual means if you have any questions or concerns.

    Update: The first reboot has been completed successfully. We are now copying the second firmware file onto the switch in preparation for the second reboot.

    Update: The second reboot has been completed successfully and the switch is now running on the latest firmware. Each of the reboots took approximately 2 minutes. All network connectivity to the affected systems should be back to normal. If you are still experiencing any problems, please don't hesitate to get in touch with our helpdesk.

  • Date - 01/06/2015 21:00 - 01/06/2015 23:59
  • Last Updated - 01/06/2015 21:19
Scheduled server reboot (Resolved)
  • Priority - Medium
  • Affecting Server - TMA01/Japetus
    We will be performing scheduled maintenance including essential software updates on TMA01/Japetus between 21:00 and 23:59 on 28/05/2015. This will require us to reboot the server so there will be a loss of service whilst this is done. The maintenance work itself should take less than 30 minutes to complete and we expect downtime to be much less than this.

    This is essential maintenance to ensure the ongoing stability and security of this server. Please accept our apologies for any inconvenience that this will cause and feel free to contact our support staff via the usual means if you have any questions or concerns.

    Update: The maintenance work has been completed and the server is back online. Total downtime was approximately 20 minutes across two reboots. If you are still experiencing any problems, please don't hesitate to get in touch with our helpdesk.

  • Date - 28/05/2015 21:00 - 28/05/2015 23:59
  • Last Updated - 28/05/2015 22:35
VPS node unreachable (Resolved)
  • Priority - High
  • Affecting System - LDeX1-VPS3
    We are currently unable to reach the LDeX1-VPS3 VPS node.

    Update: The VPS node appears to have unexpectedly rebooted. We are currently investigating the cause of this.

    Update: The VPS node is back online and VPS have booted back up.

    Update: Unfortunately there are no log entries from immediately before the reboot to indicate what may have caused this so we are currently unable to determine exactly why this happened. We are monitoring the server closely in case of any further issues. All customer VPS should be back online as of ~11:56. Please get in touch with our helpdesk in the usual manner if you are still experiencing any problems.

  • Date - 04/05/2015 11:51 - 04/05/2015 11:56
  • Last Updated - 04/05/2015 12:14
Cloud server upgrades (Resolved)
  • Priority - Medium
  • Affecting System - LDeX1 cloud server platform
  • We will be carrying out a series of hardware upgrades to our cloud server platform in LDeX1 between 21:00 on 30/03/2015 and 02:00 on 31/03/2015 in order to increase performance and expand the capacity of the platform.

    During this time there may be some brief interruptions to network connectivity as well as periods of degraded performance. The platform should be considered at risk for the duration of this maintenance.

    Update: This maintenance work was successfully completed without incident.

  • Date - 30/03/2015 21:00 - 31/03/2015 02:00
  • Last Updated - 01/04/2015 11:49
LDeX1-VPS3 Unavailable (Resolved)
  • Priority - High
  • We have identified a problem with one of our SolusVM nodes, LDeX1-VPS3. The node is currently offline and all virtual servers running on it are also offline. We are on-site and investigating and will restore service as soon as possible.

    Update: We have identified and resolved an issue with the RAID card on this server and are monitoring the RAID array rebuilding. Affected VPS may have a read-only filesystem as a result of this problem and will require a reboot in order to correct this. Please accept our apologies for the inconvenience and don't hesitate to get in touch with our helpdesk in the usual manner if you are still experiencing any problems or require any further assistance.

  • Date - 30/03/2015 19:29 - 30/03/2015 19:53
  • Last Updated - 30/03/2015 20:51
Loss of connectivity to firewall (Resolved)
  • Priority - Critical
  • Affecting System - Legacy firewall cluster 1
  • We are currently investigating a loss of connectivity to the TMA01/Japetus and TMA02/Enigma servers behind the legacy firewall cluster 1 HA pair.

    This does not affect any other firewall clusters or servers.

    Update: The firewall pair in question is now passing traffic again and all affected services have been restored. Please accept our apologies for the inconvenience and don't hesitate to get in touch with our helpdesk in the usual manner if you are still experiencing any problems.

  • Date - 27/03/2015 02:42 - 27/03/2015 04:05
  • Last Updated - 27/03/2015 04:08
Emergency reboot (Resolved)
  • Priority - High
  • Affecting Server - LDeX1-Plesk1
  • We are currently investigating a loss of service affecting clients hosted on the TMA02/Enigma server.

    Update: We have been unable to gain access to the server either remotely or via the local console, so we are carrying out an emergency reboot in order to regain access and restore service.

    Update: We have performed the emergency reboot and the server is booting back up again. Currently it is performing a filesystem check due to not being shut down cleanly.

    Update: The server is now back online and normal service has been restored. Please accept our apologies for the inconvenience and don't hesitate to get in touch with our helpdesk in the usual manner if you are still experiencing any problems.

  • Date - 18/03/2015 12:30 - 18/03/2015 12:52
  • Last Updated - 18/03/2015 12:55
LDeX1-cPanel3 unavailable (Resolved)
  • Priority - Critical
  • Affecting Server - LDeX1-cPanel3
    We are currently experiencing an outage with one of our shared cPanel servers, LDeX1-cPanel3. We are investigating and will update this status as soon as more information is available.

    Update (15:37): LDeX1-cPanel3 has completed booting back up following an unexpected restart of some of the hardware nodes it was running on. Service is now restored.

  • Date - 13/03/2015 15:15
  • Last Updated - 13/03/2015 15:40
Routine facility UPS maintenance (Resolved)
  • Priority - Medium
  • Affecting System - LDeX1 UPS (A-side and B-side)
  • We have been notified by LDeX that the UPS vendor will be performing routine preventative maintenance on the UPS infrastructure in LDeX1 on 23/02/2015 between 09:00 and 17:00.

    This maintenance work will be carried out on both the A-side and B-side UPS systems separately. At no point will both systems be under maintenance simultaneously.

    During this maintenance period, it may be necessary for either of the UPS units to be placed into bypass mode.

    This means that any equipment connected to the feed supplied by that UPS unit will be running on raw mains power and as such should be considered at-risk in case there is an outage on the utility mains feed.

    Generator backup power will remain available throughout the maintenance work if required.

    All devices with dual power supplies should be connected to both the A-side and B-side PDUs, so in the event of any problems on one of the feeds, they will still have the other feed available.

    All devices with single power supplies should be fed from our in-rack ATS units, which can switch between the two feeds fast enough that connected devices do not see any loss of power.

    If you have any questions or if you wish to double check how your device is connected, please don't hesitate to get in touch with our helpdesk in the usual manner.

    Update: The UPS maintenance work has been successfully completed without any impact. Full redundancy has been restored on all power infrastructure.

  • Date - 23/02/2015 09:00 - 23/02/2015 17:00
  • Last Updated - 23/02/2015 16:39
Scheduled server reboot (Resolved)
  • Priority - Medium
  • Affecting Server - TMA01/Japetus
    We will be performing scheduled maintenance including essential software updates on TMA01/Japetus between 21:00 and 23:59 on 28/01/2015. This will require us to reboot the server so there will be a loss of service whilst this is done. The maintenance work itself should take less than 30 minutes to complete and we expect downtime to be much less than this.

    This is essential maintenance to ensure the ongoing stability and security of this server. Please accept our apologies for any inconvenience that this will cause and feel free to contact our support staff via the usual means if you have any questions or concerns.

    Update: The maintenance work has been completed and the server is back online. Total downtime was approximately 11 minutes across one reboot. If you are still experiencing any problems, please don't hesitate to get in touch with our helpdesk.

  • Date - 28/01/2015 21:00 - 28/01/2015 23:59
  • Last Updated - 10/02/2015 23:03
Unexpected reboot of VPS node (Resolved)
  • Priority - High
  • Affecting System - LDeX1-VPS2
  • We have experienced an unexpected reboot of the LDeX1-VPS2 node and are investigating what has caused this.

    Update: We have made some configuration changes to the node based on the last log entries before the unexpected reboot.

    Update: We have experienced further unexpected reboots of this node and are continuing to investigate the cause of the problem.

    Update: We have downgraded the version of the kernel used on this node to bring it in line with other nodes. We are hopeful that this has fully resolved the issue, however we are continuing to monitor the node closely in case of any further problems.

  • Date - 29/01/2015 00:27 - 29/01/2015 07:50
  • Last Updated - 29/01/2015 08:48
Upstream network maintenance (Resolved)
  • Priority - Medium
  • Affecting System - AS41000 Network
  • One of our upstream network providers will be carrying out maintenance on two of our connections between 00:00 and 06:00 on 06/01/2015. We will gracefully remove these connections prior to the maintenance work in order to eliminate any impact to customer traffic.

    Due to the redundant nature of our network, we do not anticipate there to be any impact on customer services as a result of this maintenance, however during this time, connectivity should be considered "at risk" due to the reduced level of redundancy available.

    If you have any questions, then please do not hesitate to get in touch with our helpdesk.

    Update: Our upstream provider has rescheduled the maintenance for 27/01/2015.

    Update: This work was completed without incident.

  • Date - 27/01/2015 00:00 - 27/01/2015 06:00
  • Last Updated - 27/01/2015 21:31
Server upgrades (Resolved)
  • Priority - Medium
  • Affecting Server - LDeX1-cPanel1
    We are currently carrying out maintenance on TMA03/Tsung to upgrade it with new hardware. This maintenance window will continue until 09:00 on 11th January whilst websites are transitioned to the new server. Any relevant updates will be posted to this status message.

    Update: Account settings and data have been migrated to the new server hardware successfully. All websites are back online and functioning. SSL certificates are yet to be migrated for those sites with a certificate; this should be complete in the next hour.

    Update: All SSL certificates are now installed.

    Update: Server upgrade complete. If you experience any problems, please open a ticket.

  • Date - 10/01/2015 21:00 - 11/01/2015 10:00
  • Last Updated - 11/01/2015 13:09
Packet loss affecting VPS (Resolved)
  • Priority - High
  • Affecting System - All LDeX1 VPS customers
  • Following on from the network instability between 12:38 and 12:53 which affected all customers on the AS41000 network, VPS customers in LDeX1 would have seen approximately 50% packet loss due to problems with a LACP aggregated bundle between our LDeX1-SW01 edge switch and our LDeX1-CSW2 core switch.

    The LACP aggregated bundle in question was disabled at approximately 13:10 and normal service resumed. We are continuing to investigate what caused this problem and how we can restore the bundle to normal operation.
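
    For background on why a misbehaving member of a two-link LACP bundle shows up as roughly 50% packet loss: the switch hashes each flow onto one member link, so the flows pinned to the faulty member are dropped while the rest pass unaffected. The sketch below is a deliberately simplified model of that behaviour; the link names, flows and hash are illustrative assumptions, not how our switches actually hash traffic.

        # Simplified model of LACP hashing over a 2-member bundle: each flow
        # is pinned to one member by a hash, so a dead member drops roughly
        # half the flows. Link names and flows are hypothetical.
        members = ["link-a", "link-b"]
        dead = {"link-b"}  # the faulty member

        flows = [(f"10.0.0.{i}", "192.0.2.1") for i in range(100)]
        lost = sum(1 for flow in flows
                   if members[hash(flow) % len(members)] in dead)
        print(f"{lost} of {len(flows)} flows blackholed")  # roughly half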

    Please accept our apologies for the inconvenience that today's disruption caused and don't hesitate to get in touch with our helpdesk if you are still experiencing any problems or if you have any questions.

  • Date - 18/12/2014 12:53 - 18/12/2014 13:10
  • Last Updated - 18/12/2014 14:04
Network instability (Resolved)
  • Priority - Critical
  • Affecting System - AS41000 Network
  • Between 12:38 and 12:53 we experienced problems with our LDeX1-RT1 router blackholing packets which caused a loss of connectivity inside our network.

    LDeX1-RT1 was isolated from the network whilst further investigations could be carried out. As a result, all traffic was failed over to LDeX1-RT2 and normal service was resumed with reduced redundancy.

    The LDeX1-RT1 router has now been returned to normal operation and the network is fully redundant again. 

    Please accept our apologies for the inconvenience that this disruption caused and don't hesitate to get in touch with our helpdesk if you are still experiencing any problems or if you have any questions.

  • Date - 18/12/2014 12:38 - 18/12/2014 12:53
  • Last Updated - 18/12/2014 13:59
Packet loss (Resolved)
  • Priority - High
  • Affecting System - AS41000 Network
  • Our monitoring systems have alerted us to packet loss affecting both of our connections to one of our upstream network providers.

    We have currently disabled our connections to the provider in question, which has resolved the packet loss issues but means that we are currently running on a single upstream network provider and so the AS41000 network should currently be considered at-risk.

    We have raised this issue with the provider in question and are waiting for an explanation of what caused the issues that we experienced and whether it has been resolved before we reintroduce their connections to the network.

    Update: The packet loss was caused by a DDoS attack against another customer connected to the same router as us. The upstream network provider has put filters in place to block this attack and we have seen no further problems with our connections in the meantime, so we have reintroduced the upstream network provider to the AS41000 network. The AS41000 network is now fully redundant again.

  • Date - 15/12/2014 19:13 - 15/12/2014 19:19
  • Last Updated - 16/12/2014 14:38
Server unresponsive (Resolved)
  • Priority - High
  • Affecting Server - LDeX1-cPanel1
  • We are currently unable to access the TMA03/Tsung server remotely. This could be a repeat of the kernel panic which this server experienced two weeks ago.

    Update (17:25): The server issue has been resolved and access to the server has been restored.

  • Date - 13/12/2014 16:29 - 13/12/2014 17:25
  • Last Updated - 13/12/2014 17:40
Server unresponsive (Resolved)
  • Priority - High
  • Affecting Server - LDeX1-cPanel1
  • We are currently unable to access the TMA03/Tsung server remotely.

    Update: We are not seeing any video output from the server to our KVM, so we have dispatched a technician to physically inspect the server.

    Update: The technician was unable to find any problem with the server, however it was unresponsive locally as well, so it has been rebooted and should be back online shortly.

    Update: The server is now back online and normal service has been restored. Please accept our apologies for the inconvenience and don't hesitate to get in touch with our helpdesk in the usual manner if you are still experiencing any problems.

  • Date - 01/12/2014 09:56 - 01/12/2014 10:47
  • Last Updated - 01/12/2014 10:48
Emergency reboot (Resolved)
  • Priority - High
  • Affecting Server - LDeX1-cPanel1
    We have been forced to carry out an emergency reboot of the TMA03/Tsung server due to service-affecting problems that this server is currently experiencing. Please accept our apologies for the inconvenience caused.

    Update: The server is back online and normal service has been restored. Once again, please accept our apologies for the inconvenience and don't hesitate to get in touch with our helpdesk in the usual manner if you are still experiencing any problems.

  • Date - 25/11/2014 11:24 - 25/11/2014 11:57
  • Last Updated - 26/11/2014 12:09
Scheduled server reboot (Resolved)
  • Priority - Medium
  • Affecting Server - TMA01/Japetus
    We will be performing scheduled maintenance including essential software updates on TMA01/Japetus between 21:00 and 23:59 on 13/11/2014. This will require us to reboot the server so there will be a loss of service whilst this is done. The maintenance work itself should take less than 30 minutes to complete and we expect downtime to be much less than this.

    This is essential maintenance to ensure the ongoing stability and security of this server. Please accept our apologies for any inconvenience that this will cause and feel free to contact our support staff via the usual means if you have any questions or concerns.

    Update: The maintenance work has been completed and the server is back online. Total downtime was approximately 17 minutes across two reboots. If you are still experiencing any problems, please don't hesitate to get in touch with our helpdesk.

  • Date - 13/11/2014 21:00 - 13/11/2014 23:59
  • Last Updated - 13/11/2014 23:56
Emergency core switch reboot (Resolved)
  • Priority - Critical
  • Affecting System - LDeX1-CSW1
  • We are currently performing a reboot of one of our core switch stacks in order to resolve a connectivity issue affecting all customers in LDeX1.

    Update: The core switch has been rebooted and normal service has been restored.

  • Date - 13/11/2014 10:56 - 13/11/2014 11:05
  • Last Updated - 13/11/2014 12:49
Core switch crash (Resolved)
  • Priority - High
  • Affecting System - LDeX1-CSW1
  • We have just experienced a core switch crash in LDeX1.

    One of the physical Juniper EX4200 switches which make up the logical LDeX1-CSW1 virtual chassis crashed and rebooted. The remaining physical switch in LDeX1-CSW1 took over some of the traffic, whilst the VRRP default gateway IP addresses on VLANs directly connected to the problem switch will have failed over to the second virtual chassis of core switches (LDeX1-CSW2).
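
    For readers unfamiliar with VRRP: each VLAN's default gateway is a virtual IP address owned by whichever router currently holds the highest priority, and when the master stops sending advertisements a backup takes the address over. The sketch below shows the election logic in miniature; the names and priorities are hypothetical, not our actual configuration.

        # Simplified VRRP master election: the highest-priority router that
        # is still advertising owns the virtual gateway IP. Names and
        # priorities are hypothetical.
        priorities = {"LDeX1-CSW1": 200, "LDeX1-CSW2": 100}
        alive = {"LDeX1-CSW2"}  # CSW1 has crashed and stopped advertising

        def vrrp_master(priorities: dict, alive: set) -> str:
            """Return the router that should own the virtual gateway IP."""
            return max(alive, key=lambda router: priorities[router])

        print(vrrp_master(priorities, alive))  # LDeX1-CSW2 takes over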

    Unfortunately, any customers who are single homed and directly connected onto the affected core switch will have lost connectivity whilst the switch in question crashed and rebooted. Such customers are in the minority as we provision all services with redundant connections by default.

    We believe that the root cause of this reboot was a software bug related to routine configuration work which was taking place at the time, and so we will be looking to upgrade all of the core switches to the latest stable version of JUNOS in the near future.

    All affected services should be restored as of 13:29 (approximately 3 minutes of disruption). If you are still experiencing any problems, please don't hesitate to get in touch with our helpdesk.

  • Date - 06/11/2014 13:26 - 06/11/2014 13:29
  • Last Updated - 06/11/2014 13:43
Network at-risk (Resolved)
  • Priority - Medium
  • Affecting System - AS41000 Network
    One of our upstream network providers is currently experiencing problems with their network and as such we have removed their routes from our network. Clients may have noticed a brief period of disruption whilst re-routing occurred.

    We have sufficient capacity on our remaining connections to carry all of our traffic, however the AS41000 network is currently running with reduced redundancy and so should be considered at-risk until we are satisfied that the carrier in question has restored stability to their network.

    If you have any questions or are experiencing any problems, then please do not hesitate to get in touch with our helpdesk.

    Update: Our upstream network provider has informed us that a power supply for one of the devices in their rack failed, which in turn caused a circuit breaker to trip and so the router which terminates our connections lost power.

    When the circuit breaker was reset and power to the rack was restored, one of the line cards in the router in question did not return to service correctly and so needed to be removed and re-inserted before it would function normally again.

    Our network is now fully redundant again and the upstream network provider in question does not anticipate any further problems related to this, however they are shipping spare parts to the data centre just in case.

  • Date - 12/10/2014 17:09 - 13/10/2014 18:06
  • Last Updated - 14/10/2014 10:29
Routine facility maintenance work (Resolved)
  • Priority - Medium
  • Affecting System - LDeX1 data centre generator LV meter
  • We have been notified by London Data eXchange that they will be carrying out work on the LV metering for the backup generator in the LDeX1 facility between 08:30 and 17:00 on 08/10/2014 as part of their planned preventative maintenance programme.

    This work will involve upgrading the LV metering on the feed from the backup diesel generator, and as such the LDeX1 facility may be operating at a reduced level of redundancy whilst this work is being carried out, so services should be considered "at-risk".

    Generator power may be unavailable for periods during this work, however no work is being carried out on the utility mains power infrastructure. The three facility UPS have between 45 and 65 minutes of run time at the current load.

    Once the work on the LV metering for the backup diesel generator is complete, the facility operator will also be carrying out their quarterly full load test, which will involve a simulated mains power failure in order to verify that all backup power infrastructure is functioning correctly.

    All racks will continue to be fed from two of the facility UPS via independent A+B feeds throughout the duration of the maintenance work. All devices with dual power supplies should be connected to both the A-side and B-side PDUs, so in the event of any problems on one of the feeds, they will still have the other feed available. All devices with single power supplies should be fed from our in-rack ATS units, which can switch between the two feeds fast enough that connected devices do not see any loss of power.

    If you have any questions or if you wish to double check how your devices are connected, please don't hesitate to get in touch with our helpdesk in the usual manner.

    Update: The facility operator has confirmed that the work has been completed successfully.

  • Date - 08/10/2014 08:30 - 08/10/2014 17:00
  • Last Updated - 08/10/2014 12:32
DDoS attack against AS41000 (Resolved)
  • Priority - Critical
  • Affecting System - AS41000 Network
  • We are currently experiencing a DDoS (distributed denial of service) attack against our network directed towards one of our customers. This is causing high latency and packet loss on our network which will result in pages loading slowly or not at all. We are actively investigating and will post updates as the situation develops.

    Update: The attack subsided before we were able to put our mitigation measures in place, however we have identified the target IP addresses and are ready to filter the traffic upstream should the attack resume. If you are still experiencing any connectivity problems then please contact us in the normal manner. Please accept our apologies for the inconvenience caused.
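
    For those curious how traffic can be "filtered upstream", one common mechanism (described generically here, not necessarily our exact setup) is remotely-triggered blackholing (RTBH): the victim's /32 is announced to upstream providers tagged with the well-known BLACKHOLE community (RFC 7999), so they drop the attack traffic at their edge rather than delivering it. Below is a minimal sketch of a helper process feeding such an announcement to ExaBGP; the prefix and next-hop are documentation example values.

        # Generic RTBH sketch: announce the victim /32 tagged with the
        # well-known BLACKHOLE community (65535:666, RFC 7999) so upstreams
        # drop the traffic at their edge. ExaBGP reads API commands from
        # this helper process's stdout. Prefix/next-hop are example values.
        import sys
        import time

        VICTIM = "192.0.2.10/32"  # hypothetical attack target

        def announce_blackhole(prefix: str) -> None:
            sys.stdout.write(f"announce route {prefix} next-hop 192.0.2.1 community [65535:666]\n")
            sys.stdout.flush()

        announce_blackhole(VICTIM)
        time.sleep(5)  # keep the helper alive long enough for ExaBGP to act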

  • Date - 28/09/2014 16:39 - 28/09/2014 16:47
  • Last Updated - 28/09/2014 17:19
Scheduled server reboot (Resolved)
  • Priority - Medium
  • Affecting Server - TMA01/Japetus
    We will be performing scheduled maintenance including essential software updates on TMA01/Japetus between 21:00 and 23:59 on 25/09/2014. This will require us to reboot the server so there will be a loss of service whilst this is done. The maintenance work itself should take less than 30 minutes to complete and we expect downtime to be much less than this.

    This is essential maintenance to ensure the ongoing stability and security of this server. Please accept our apologies for any inconvenience that this will cause and feel free to contact our support staff via the usual means if you have any questions or concerns.

    Update: The maintenance work has been completed and the server is back online. Total downtime was approximately 12 minutes across two reboots. If you are still experiencing any problems, please don't hesitate to get in touch with our helpdesk.

  • Date - 25/09/2014 21:00 - 25/09/2014 23:59
  • Last Updated - 25/09/2014 22:50
Read only filesystem (Resolved)
  • Priority - High
  • Affecting Server - LDeX1-Plesk1
  • The filesystem for the /var partition on the TMA02/Enigma server has become read only, so we are performing an emergency reboot on the server to run a filesystem check.

    Update: The filesystem check has been completed successfully and all services appear to be running normally again. Please accept our apologies for the inconvenience and don't hesitate to get in touch with our helpdesk in the usual manner if you are still experiencing any problems.

  • Date - 25/09/2014 07:59 - 25/09/2014 08:39
  • Last Updated - 25/09/2014 08:43
Router software update (Resolved)
  • Priority - Medium
  • Affecting System - LDeX1-RT2
  • Following the update of the software on one of our border routers (LDeX1-RT1) on 11/09/2014, we will be performing the same update on our remaining border router (LDeX1-RT2) on 18/09/2014 between 22:00 and 23:59. This will require us to reboot the router so the AS41000 network will be running at reduced redundancy whilst this is carried out. The maintenance work itself should take less than 30 minutes to complete and we expect the at-risk period to be much less than this.

    We will divert traffic away from this router whilst the maintenance is taking place, so we do not expect any noticeable impact to customers. The AS41000 network should however be considered to be at-risk during this time as we will be running with only one border router.
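
    In practice, "diverting traffic away" from a border router usually means making it unattractive before the reboot, for example by deactivating its BGP sessions (or applying a policy such as AS-path prepending) and waiting for routes to reconverge onto the remaining router. The following is a hypothetical sketch using the netmiko library against a Junos device; the hostname, credentials and group name are placeholders, and this is not necessarily the exact procedure we use.

        # Hypothetical drain sketch (netmiko + Junos): deactivate the
        # router's BGP sessions so traffic reconverges onto the other border
        # router before maintenance. Host, credentials and group name are
        # placeholders, not our real configuration.
        from netmiko import ConnectHandler

        device = ConnectHandler(device_type="juniper_junos",
                                host="rt.example.net", username="noc",
                                password="placeholder")
        device.send_config_set(["deactivate protocols bgp group TRANSIT"])
        device.commit(comment="drain router ahead of software update")
        device.disconnect()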

    This is essential maintenance in order to ensure the ongoing stability and security of our network. Please accept our apologies for any inconvenience that this will cause and feel free to contact our support staff via the usual means if you have any questions or concerns.

    Update: This maintenance work has been completed without any impact to traffic on the network and the LDeX1-RT2 router is back online running the new version of JUNOS. Full network redundancy has been restored.

  • Date - 18/09/2014 22:00 - 18/09/2014 23:59
  • Last Updated - 18/09/2014 22:28
Emergency reboot (Resolved)
  • Priority - High
  • Affecting Server - LDeX1-cPanel1
    We are carrying out an emergency reboot of the TMA03/Tsung server due to service-affecting problems that this server is currently experiencing. Please accept our apologies for the inconvenience caused.

    Update: The server is back online and normal service has been restored. Once again, please accept our apologies for the inconvenience and don't hesitate to get in touch with our helpdesk in the usual manner if you are still experiencing any problems.

  • Date - 16/09/2014 16:57 - 16/09/2014 17:08
  • Last Updated - 16/09/2014 17:09
Router software update (Resolved)
  • Priority - Medium
  • Affecting System - LDeX1-RT1
  • We will be performing a software update on one of our border routers (LDeX1-RT1) on 11/09/2014 between 22:00 and 23:59. This will require us to reboot the router so the AS41000 network will be running at reduced redundancy whilst this is carried out. The maintenance work itself should take less than 30 minutes to complete and we expect the at-risk period to be much less than this.

    We will divert traffic away from this router whilst the maintenance is taking place, so we do not expect any noticeable impact to customers. The AS41000 network should however be considered to be at-risk during this time as we will be running with only one border router.

    The second router (LDeX1-RT2) will remain on the present software version for a week whilst we ensure that there are no unexpected issues with the new software version in production use, and will then be upgraded in a second maintenance window on 18/09/2014.

    This is essential maintenance in order to ensure the ongoing stability and security of our network. Please accept our apologies for any inconvenience that this will cause and feel free to contact our support staff via the usual means if you have any questions or concerns.

    Update: This maintenance work has been completed and the LDeX1-RT1 router is back online running the new version of JUNOS.

  • Date - 11/09/2014 22:00 - 11/09/2014 23:59
  • Last Updated - 11/09/2014 23:14
Scheduled server reboot (Resolved)
  • Priority - Medium
  • Affecting Server - LDeX1-cPanel1
    We will be performing scheduled maintenance including essential software updates on TMA03/Tsung between 23:00 and 23:59 on 04/08/2014. This will require us to reboot the server so there will be a loss of service whilst this is done. The maintenance work itself should take less than 30 minutes to complete and we expect downtime to be much less than this.

    This is essential maintenance to ensure the ongoing stability and security of this server. Please accept our apologies for any inconvenience that this will cause and feel free to contact our support staff via the usual means if you have any questions or concerns.

    Update: The maintenance work has been completed and the server is back online. If you are still experiencing any problems, please don't hesitate to get in touch with our helpdesk.

  • Date - 04/08/2014 23:00 - 04/08/2014 23:59
  • Last Updated - 04/08/2014 23:43
Network disruption (Resolved)
  • Priority - Critical
  • Affecting System - AS41000
  • We have received a number of alerts from our external monitoring system indicating a problem with connectivity to our network.

    Update: One of our routers has suffered a software issue which caused services to crash and restart. This took approximately 10 minutes to complete as the router had to re-establish several BGP sessions and learn several hundred thousand routes. During this time the router was not passing traffic, however it appeared to other routers on the network to be stable, so the normal failover mechanisms did not kick in. We believe that this is related to the version of software running on this router and so will be arranging a scheduled maintenance window in order to update it once we have investigated further. Please accept our apologies for the inconvenience caused and don't hesitate to let our helpdesk know if you are still experiencing any problems with your services.
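
    As a rough sanity check on the ten-minute figure, learning a few hundred thousand routes at a sustained rate on the order of a thousand routes per second works out to a similar timescale. Both numbers below are assumed illustrative values, not measurements from our routers.

        # Back-of-the-envelope BGP reconvergence estimate. Both figures are
        # assumed illustrative values, not measurements from our routers.
        ROUTES = 500_000      # "several hundred thousand routes"
        RATE_PER_SEC = 1_000  # assumed sustained route-install rate

        print(f"~{ROUTES / RATE_PER_SEC / 60:.0f} minutes to relearn the table")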

  • Date - 29/07/2014 00:00 - 30/07/2014 00:10
  • Last Updated - 30/07/2014 00:26
Emergency reboot (Resolved)
  • Priority - High
  • Affecting Server - LDeX1-cPanel1
  • We are currently carrying out an emergency reboot on the TMA03/Tsung server due to some problems experienced with the backups running on this server.

    Update: Whilst rebooting, the server has detected an issue with the filesystem on one of the partitions and so is performing a filesystem check in order to verify the integrity of the data stored on this partition.

    Update: The filesystem check has completed and the server has finished booting. Normal service has been restored. Please accept our apologies for the inconvenience and don't hesitate to get in touch with our helpdesk in the usual manner if you are still experiencing any problems.

  • Date - 27/07/2014 11:12 - 27/07/2014 11:28
  • Last Updated - 27/07/2014 11:31
Scheduled server reboot (Resolved)
  • Priority - Medium
  • Affecting Server - TMA01/Japetus
    We will be performing scheduled maintenance including essential software updates on TMA01/Japetus between 21:00 and 23:59 on 08/07/2014. This will require us to reboot the server so there will be a loss of service whilst this is done. The maintenance work itself should take less than 30 minutes to complete and we expect downtime to be much less than this.

    This is essential maintenance to ensure the ongoing stability and security of this server. Please accept our apologies for any inconvenience that this will cause and feel free to contact our support staff via the usual means if you have any questions or concerns.

    Update: The maintenance work has been completed and the server is back online. Total downtime was approximately 6 minutes across a single reboot. If you are still experiencing any problems, please don't hesitate to get in touch with our helpdesk.

  • Date - 08/07/2014 21:00 - 08/07/2014 23:59
  • Last Updated - 14/07/2014 14:06
Routine facility maintenance work (Resolved)
  • Priority - Medium
  • Affecting System - LDeX1 data centre LV panel
  • We have been notified by London Data eXchange that they will be carrying out work on the LV panel breakers and metering in the LDeX1 facility between 08:30 and 17:00 on 14/07/2014 as part of their planned preventative maintenance programme.

    This work will involve upgrading the LV metering as well as visual inspection, servicing and testing of the main ACB breaker. The LDeX1 facility may be operating at a reduced level of redundancy whilst this work is being carried out and as such services should be considered "at-risk".

    The LV panel is split into three sections, one serving each of the three UPS. Only one section of the LV panel will be worked on at any time. Currently all three facility UPS have at least one hour of run time and generator backup power will remain available throughout the maintenance work if required.

    All racks will continue to be fed from two of the facility UPS via independent A+B feeds throughout the duration of the maintenance work.

    All devices with dual power supplies should be connected to both the A-side and B-side PDUs, so in the event of any problems on one of the feeds, they will still have the other feed available.

    All devices with single power supplies should be fed from our in-rack ATS units, which can switch between the two feeds fast enough that connected devices do not see any loss of power.

    If you have any questions or if you wish to double check how your device is connected, please don't hesitate to get in touch with our helpdesk in the usual manner.

    Update: This work will take place on 14/07/2014, not 04/07/2014 as originally advised.

    Update: This work has now been completed without incident.

  • Date - 14/07/2014 08:30 - 14/07/2014 17:00
  • Last Updated - 14/07/2014 14:04
Upstream network maintenance (Resolved)
  • Priority - Medium
  • Affecting System - AS41000
  • One of our upstream network providers has informed us that they need to carry out some emergency maintenance work which will affect our connections to them.

    This work will take place at some point within the next 90 minutes, so we have temporarily disabled our connections to them in order to prevent any disruption.

    This means that the AS41000 network should currently be considered "at-risk" as we are running without normal levels of redundancy.

    Update: The work took slightly longer than expected, however our upstream network provider has given us the all clear and we have re-enabled our connections to them. The network is running at full redundancy again.

  • Date - 18/06/2014 14:16 - 18/06/2014 17:17
  • Last Updated - 18/06/2014 17:45
Scheduled server reboot (Resolved)
  • Priority - Medium
  • Affecting Server - TMA01/Japetus
    We will be performing scheduled maintenance including essential software updates on TMA01/Japetus between 21:00 and 23:59 on 02/06/2014. This will require us to reboot the server so there will be a loss of service whilst this is done. The maintenance work itself should take less than 30 minutes to complete and we expect downtime to be much less than this.

    This is essential maintenance to ensure the ongoing stability and security of this server. Please accept our apologies for any inconvenience that this will cause and feel free to contact our support staff via the usual means if you have any questions or concerns.

    Update: The maintenance work has been completed and the server is back online. Total downtime was approximately 7 minutes across a single reboot. If you are still experiencing any problems, please don't hesitate to get in touch with our helpdesk.

  • Date - 02/06/2014 21:00 - 02/06/2014 23:59
  • Last Updated - 07/06/2014 20:16
Network at-risk (Resolved)
  • Priority - Medium
  • Affecting System - AS41000
    One of our upstream network carriers is experiencing problems with their network due to a hardware failure. We have removed their routes from our network and there is no disruption to any services as we have sufficient capacity on our remaining connections to carry all of our traffic.

    As a result of this, the AS41000 network is currently running with reduced redundancy and so should be considered at-risk until we are satisfied that the carrier in question has restored stability to their network.

    If you have any questions or are experiencing any problems, then please do not hesitate to get in touch with our helpdesk.

    Update: The carrier has provided us with an update that they have experienced a supervisor card failure on their router in Telehouse North which is having a serious effect on the stability of their network. They have tried to swap the card out with a spare, however this seems to be faulty. They are attempting to source a replacement, however they do not have an ETA at the moment.

    Update: The carrier has advised that the network should be stable and so we have brought our connections to them back up. Full redundancy has been restored to the AS41000 network.

  • Date - 05/06/2014 17:06 - 07/06/2014 19:54
  • Last Updated - 07/06/2014 20:15
DDoS attack against AS41000 (Resolved)
  • Priority - Critical
  • Affecting System - AS41000 Network
  • We are currently experiencing a DDoS (distributed denial of service) attack against our network directed towards one of our customers. This is causing high latency and packet loss on our network which will result in pages loading slowly or not at all. We are actively investigating and will post updates as the situation develops.

    Update: We have identified the target IP address and are now filtering the traffic upstream. We have observed a period of stability and normal performance from the network, so we believe that this mitigation has resolved the issue. If you are still experiencing any connectivity problems then please contact us in the normal manner. Please accept our apologies for the inconvenience caused.

  • Date - 09/05/2014 12:13 - 09/05/2014 12:45
  • Last Updated - 09/05/2014 12:59
Scheduled server reboot (Resolved)
  • Priority - Medium
  • Affecting Server - TMA01/Japetus
    We will be performing scheduled maintenance including essential software updates on TMA01/Japetus between 21:00 and 23:59 on 28/04/2014. This will require us to reboot the server so there will be a loss of service whilst this is done. The maintenance work itself should take less than 30 minutes to complete and we expect downtime to be much less than this.

    This is essential maintenance to ensure the ongoing stability and security of this server. Please accept our apologies for any inconvenience that this will cause and feel free to contact our support staff via the usual means if you have any questions or concerns.

    Update: The maintenance work has been completed and the server is back online. Total downtime was approximately 12 minutes across two reboots. If you are still experiencing any problems, please don't hesitate to get in touch with our helpdesk.

  • Date - 28/04/2014 21:00 - 28/04/2014 23:59
  • Last Updated - 03/05/2014 09:05
Network disruption (Resolved)
  • Priority - High
  • Affecting System - AS41000 network
    Some customers may have experienced network disruption between 20:34 and 20:36 on 28/03/2014 due to an issue with one of our upstream network providers.

    We've seen our BGP sessions to one of our upstream network providers flap and the provider in question has confirmed that they are investigating a problem with their network in the LDeX1 data centre.

    In the meantime, we have disabled our BGP sessions to them whilst they stabilise their network. This means that we are operating with a reduced level of redundancy and as such the network should be considered at-risk.

    Update: We have received confirmation from the upstream network provider that this issue was the result of a DDoS against another of their customers and that the issue has been resolved. We turned our BGP sessions to the provider in question back on at roughly 21:20 and restored full redundancy to the network. No further issues have been observed since.

  • Date - 28/03/2014 20:34 - 28/03/2014 20:36
  • Last Updated - 29/03/2014 12:15
Upstream network maintenance (Resolved)
  • Priority - Medium
  • Affecting System - AS41000 network
  • One of our upstream network providers will be carrying out maintenance on two of our connections between 13:00 and 14:00 on 24/02/2014. We will gracefully remove these connections prior to the maintenance work in order to eliminate any impact to customer traffic.

    Due to the redundant nature of our network, we do not anticipate there to be any impact on customer services as a result of this maintenance, however during this time, connectivity should be considered "at risk" due to the reduced level of redundancy available.

    If you have any questions, then please do not hesitate to get in touch with our helpdesk.

    Update: This work was completed successfully at 13:50, with 5 minutes' loss of redundancy and no disruption to service.

  • Date - 24/03/2014 13:00 - 24/03/2014 14:00
  • Last Updated - 24/03/2014 15:21
DDoS attack against AS41000 (Resolved)
  • Priority - Critical
  • Affecting System - AS41000 Network
  • We are currently experiencing a DDoS (distributed denial of service) attack against our network directed towards one of our customers. This is causing high latency and packet loss on our network which will result in pages loading slowly or not at all. We are actively investigating and will post updates as the situation develops.

    Update: We have identified the customer that is being targeted by this attack and are working to mitigate the situation to restore normal service to our other clients.

    Update: The attack has been successfully mitigated and the network has now stabilised. If you are experiencing issues please open a ticket in the usual manner.

  • Date - 23/02/2014 10:34
  • Last Updated - 23/02/2014 11:58
Network disruption (Resolved)
  • Priority - High
  • Affecting System - AS41000 network
  • Some customers may have experienced network disruption in the form of packet loss or increased latency between 10:20 and 10:30 on 06/02/2014.

    We have two connections to each of our two upstream network providers and traffic passing over one of these connections to one provider was affected. Once we identified which connection was causing these issues, we immediately disabled it and saw network connectivity return to normal.

    The provider in question has advised us that an aggregation port in their network became saturated due to a 12Gbps DDoS against another customer. Any traffic traversing this link would have been affected.

    The provider is already in the process of installing a number of new devices on their network in order to remove such aggregation points and significantly increase their capacity to deal with large DDoS attacks.

    The attack has now been filtered and we have re-introduced the connection into our network. We are continuing to closely monitor traffic in case there are any recurrences.

    Please accept our apologies for the inconvenience and don't hesitate to get in touch with our helpdesk in the usual manner if you are still experiencing any problems.

  • Date - 06/02/2014 10:20 - 06/02/2014 10:30
  • Last Updated - 06/02/2014 10:49
Routine facility UPS maintenance (Resolved)
  • Priority - Medium
  • Affecting System - LDeX1 UPS (A-side and B-side)
  • We have been notified by LDeX that the UPS vendor will be performing routine preventative maintenance on the UPS infrastructure in LDeX1 on 04/02/2014 between 09:00 and 17:00.

    This maintenance work will be carried out on both the A-side and B-side UPS systems separately. At no point will both systems be under maintenance simultaneously.

    During this maintenance period, it may be necessary for either of the UPS units to be placed into bypass mode.

    This means that any equipment connected to the feed supplied by that UPS unit will be running on raw mains power and as such should be considered at-risk in case there is an outage on the utility mains feed.

    Generator backup power will remain available throughout the maintenance work if required.

    All devices with dual power supplies should be connected to both the A-side and B-side PDUs, so in the event of any problems on one of the feeds, they will still have the other feed available.

    All devices with single power supplies should be fed from our in-rack ATS units, which can switch between the two feeds fast enough that connected devices do not see any loss of power.

    If you have any questions or if you wish to double check how your device is connected, please don't hesitate to get in touch with our helpdesk in the usual manner.

    Update: The UPS maintenance work has been successfully completed without any impact. Full redundancy has been restored on all power infrastructure.

  • Date - 04/02/2014 09:00 - 04/02/2014 17:00
  • Last Updated - 04/02/2014 15:44
Filesystem problems on TMA02/Enigma (Resolved)
  • Priority - High
  • Affecting Server - LDeX1-Plesk1
    We are currently investigating problems with the filesystem on TMA02/Enigma. We have taken the server offline whilst we attempt to repair the filesystem.

    Update: The filesystem has been repaired and the server is now back online. We are continuing to monitor the server in case of any further issues. If you are still experiencing any problems with accounts hosted on the server, please don't hesitate to get in touch with our helpdesk in the usual manner.

    Update: One of the hard drives in the TMA02/Enigma server has failed and is currently being replaced.

    Update: The failed hard drive has been replaced and the RAID array is rebuilding. We are also carrying out a filesystem check on the server to make sure that no further corruption has occurred.

    Update: The filesystem check has completed and the system is back online. The RAID array rebuild is continuing. Normal service should be resumed, however you may notice slight performance degradation whilst the rebuild completes.

    Update: The RAID array rebuild has finished. 

  • Date - 01/02/2014 22:12 - 02/02/2014 13:42
  • Last Updated - 02/02/2014 18:15
MySQL upgrade (Resolved)
  • Priority - Medium
  • Affecting Server - LDeX1-cPanel1
    We will be upgrading the version of MySQL on TMA03/Tsung on 29/12/2013 between 21:00 and 23:59. During this period MySQL will be intermittently unavailable and PHP scripts making use of MySQL functions may return HTTP 500 internal server errors.
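
    Applications that need to ride through a short database maintenance window like this can often mask the intermittent unavailability with a bounded retry rather than failing immediately with a 500 error. The following is a generic sketch, not specific to this server; the host and port are placeholders.

        # Generic bounded-retry sketch for riding out a short database
        # maintenance window. Host and port are placeholders.
        import socket
        import time

        def wait_for_mysql(host="127.0.0.1", port=3306,
                           attempts=10, delay=3.0):
            """Return True once a TCP connection to MySQL succeeds."""
            for _ in range(attempts):
                try:
                    with socket.create_connection((host, port), timeout=2):
                        return True
                except OSError:
                    time.sleep(delay)  # back off while the upgrade runs
            return False

        print(wait_for_mysql())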

    This is essential maintenance to ensure the ongoing stability and security of this server. Please accept our apologies for any inconvenience that this will cause and feel free to contact our support staff via the usual means if you have any questions or concerns.

    Update: The MySQL upgrade work has started.

    Update: The MySQL upgrade has been completed and normal service has been restored. Downtime was approximately 55 minutes due to some unexpected complications.
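
    For clients whose applications need to tolerate a brief database outage like this, the usual pattern is to retry the connection with a short back-off rather than failing straight to an error page. A minimal Python sketch of that idea follows; the pymysql driver, host name, credentials and retry counts are all illustrative assumptions rather than details of our platform.

        import time

        import pymysql  # assumed MySQL client library; any driver with connect() works similarly

        def connect_with_retry(retries=5, delay=2.0):
            """Connect to MySQL, backing off briefly between failed attempts."""
            for attempt in range(1, retries + 1):
                try:
                    return pymysql.connect(host="localhost", user="dbuser",
                                           password="secret", database="exampledb")
                except pymysql.err.OperationalError:
                    if attempt == retries:
                        raise  # give up and surface the error (e.g. as an HTTP 500)
                    time.sleep(delay * attempt)  # simple linear back-off

        conn = connect_with_retry()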

  • Date - 29/12/2013 21:00 - 29/12/2013 23:59
  • Last Updated - 29/12/2013 23:04
Scheduled server reboot (Resolved)
  • Priority - Medium
  • Affecting Server - LDeX1-cPanel1
  • We will be performing scheduled maintenance including essential software updates on TMA03/Tsung between 21:00 and 23:59 on 19/12/2013. This will require us to reboot the server so there will be a loss of service whilst this is done. The maintenance work itself should take less than 30 minutes to complete and we expect downtime to be much less than this.

    This is essential maintenance to ensure the ongoing stability and security of this server. Please accept our apologies for any inconvenience that this will cause and feel free to contact our support staff via the usual means if you have any questions or concerns.

    Update: During the reboot an fsck of one of the partitions was forced due to the time elapsed since the last check. This partition is quite large so the check may take some time to finish. The server should still be back online before the end of the maintenance window.

    Update: The server is now back online again and normal service has been resumed. Please accept our apologies for the longer than expected interruption to service as part of this scheduled maintenance. Total downtime was approximately 1 hour, 30 minutes. If you are still experiencing any problems, please don't hesitate to get in touch with our helpdesk.

  • Date - 19/12/2013 21:00 - 19/12/2013 23:59
  • Last Updated - 19/12/2013 23:04
Scheduled server reboot (Resolved)
  • Priority - Medium
  • Affecting Server - LDeX1-Plesk1
  • We will be performing scheduled maintenance including essential software updates on TMA02/Enigma between 21:00 and 23:59 on 19/12/2013. This will require us to reboot the server so there will be a loss of service whilst this is done. The maintenance work itself should take less than 30 minutes to complete and we expect downtime to be much less than this.

    This is essential maintenance to ensure the ongoing stability and security of this server. Please accept our apologies for any inconvenience that this will cause and feel free to contact our support staff via the usual means if you have any questions or concerns.

    Update: The maintenance work has been completed and the server is back online. Total downtime was approximately 6 minutes. If you are still experiencing any problems, please don't hesitate to get in touch with our helpdesk.

  • Date - 19/12/2013 21:00 - 19/12/2013 23:59
  • Last Updated - 19/12/2013 21:46
Scheduled server reboot (Resolved)
  • Priority - Medium
  • Affecting Server - TMA01/Japetus
  • We will be performing scheduled maintenance including essential software updates on TMA01/Japetus between 21:00 and 23:59 on 19/12/2013. This will require us to reboot the server so there will be a loss of service whilst this is done. The maintenance work itself should take less than 30 minutes to complete and we expect downtime to be much less than this.

    This is essential maintenance to ensure the ongoing stability and security of this server. Please accept our apologies for any inconvenience that this will cause and feel free to contact our support staff via the usual means if you have any questions or concerns.

    Update: The maintenance work has been completed and the server is back online. Total downtime was approximately 6 minutes. If you are still experiencing any problems, please don't hesitate to get in touch with our helpdesk.

  • Date - 19/12/2013 21:00 - 19/12/2013 23:59
  • Last Updated - 19/12/2013 21:45
Scheduled server reboot (Resolved)
  • Priority - Medium
  • Affecting Server - TMA01/Japetus
  • We will be performing scheduled maintenance including essential software updates on TMA01/Japetus between 21:00 and 23:59 on 07/11/2013. This will require us to reboot the server so there will be a loss of service whilst this is done. The maintenance work itself should take less than 30 minutes to complete and we expect downtime to be much less than this.

    This is essential maintenance to ensure the ongoing stability and security of this server. Please accept our apologies for any inconvenience that this will cause and feel free to contact our support staff via the usual means if you have any questions or concerns.

    Update: The maintenance work has been completed and the server is back online. Total downtime was approximately 7 minutes. If you are still experiencing any problems, please don't hesitate to get in touch with our helpdesk.

  • Date - 07/11/2013 21:00 - 07/11/2013 23:59
  • Last Updated - 07/11/2013 21:33
High load (Resolved)
  • Priority - Critical
  • Affecting Server - LDeX1-cPanel1
  • We are investigating periods of high load affecting TMA03/Tsung. This is affecting all services hosted on TMA03/Tsung.

    Update: We are currently carrying out some emergency maintenance work in order to stabilise this server.

    Update: We are currently rebooting the server in order to load a new kernel.

    Update: The server is back up and we are monitoring it closely to see if this has improved the situation.

    Update: The server appears to be performing normally now, however we continue to closely monitor performance.

  • Date - 07/11/2013 06:21 - 07/11/2013 11:56
  • Last Updated - 07/11/2013 12:31
DDoS attack (Resolved)
  • Priority - Critical
  • Affecting System - AS41000 network
  • One of our customers experienced a DDoS attack starting at approximately 15:35. This attack caused network-wide packet loss, which was brought under control by 15:42. We are continuing to monitor the network in case the attack shifts. Please accept our apologies for the inconvenience.

  • Date - 04/11/2013 15:35 - 04/11/2013 15:42
  • Last Updated - 04/11/2013 16:11
LDeX1-VPS1 (Resolved)
  • Priority - Critical
  • We are currently experiencing a problem with one of our VPS nodes (LDeX1-VPS1) which has caused clients' virtual servers hosted on that node to go offline. We've identified the problem and are working to resolve the issue now.

    Update (21h39): The VPS node has been rebooted and service has been restored.

    Update (02h30): The issue has reoccurred. Tests indicate an issue with the RAID controller, which we are currently investigating.

    Update (03h05): We've identified an issue with the RAID controller in LDeX1-VPS1 and are currently attempting to resolve it as a matter of extreme urgency. At this time we believe that all data is intact and we will bring nodes back online as soon as possible, however this outage is likely to extend into the afternoon of Saturday 21st September.

    Update (11h45): This issue has necessitated replacing a part which unfortunately was not readily available. This part has been ordered and will be delivered to the data centre at around 15h30 this afternoon. We aim to have it replaced as fast as possible and restore service soon after.

    Update (16h43): The spare part which was required to bring LDeX1-VPS1 back online has arrived at the datacentre and will be fitted to the server straight away.

    Update (17h44): The new part has been installed and the server booted back up. We are working on restoring access to data so that virtual servers can boot back up.

    Update (18h33): The server has booted successfully and access to data has been restored. All virtual servers are now booting up. This issue is now considered resolved.

  • Date - 20/09/2013 20:15 - 21/09/2013 17:36
  • Last Updated - 21/09/2013 17:36
Network maintenance (Resolved)
  • Priority - Medium
  • Affecting System - AS41000 network
  • In response to the brief periods of network disruption on 31/08/2013 due to IS-IS instability between our border routers and core switches, we have been auditing the network configuration and will be carrying out some remedial network maintenance across the AS41000 network on 05/09/2013 between 22:00 and 23:59.

    Whilst we will be doing everything possible to minimise the impact to clients during this maintenance, you may see further periods of disruption to internet connectivity during this maintenance work. We understand the inconvenience that any network disruption causes our clients; however, this work is essential in order to ensure the future stability of your network connectivity. If you have any questions, then please do not hesitate to get in touch with our helpdesk.

    Update: This work has been successfully completed without any impact on customers' services.

  • Date - 05/09/2013 22:00 - 05/09/2013 23:59
  • Last Updated - 05/09/2013 22:32
Network maintenance (Resolved)
  • Priority - Medium
  • Affecting System - AS41000 network
  • We will be performing maintenance on one of our routers between 21:00 and 23:59 on 19/08/2013 which will require us to completely remove the router from the network. All traffic will be moved off the affected router before the maintenance starts.

    Due to the redundant nature of our network, we do not anticipate there to be any impact on customer services as a result of this maintenance; however, during this time, connectivity should be considered "at risk" due to the reduced level of redundancy available.

    If you have any questions, then please do not hesitate to get in touch with our helpdesk.

  • Date - 19/08/2013 21:00 - 19/08/2013 23:59
  • Last Updated - 04/09/2013 17:17
Network instability (Resolved)
  • Priority - Critical
  • Affecting System - AS41000 network
  • Between 16:46 and 17:10 on 31/08/2013 customers will have experienced 1-2 minute disruptions to their connectivity due to network instability. This was caused by problems with the IS-IS negotiations between our border routers and core switches. We believe that this has now been resolved; however, we will continue to closely monitor the network in case of any further problems.

    Please accept our apologies for the inconvenience and don't hesitate to get in touch with our helpdesk if you are still experiencing problems.

  • Date - 31/08/2013 16:46 - 31/08/2013 17:10
  • Last Updated - 04/09/2013 17:00
Internal e-mail server migration (Resolved)
  • Priority - High
  • Affecting System - Freethought Internet internal e-mail
  • We will be moving the internal Freethought Internet e-mail system to new servers on 02/08/2013 between 21:00 and 23:59, which will mean that we have no access to our Freethought e-mail accounts during this time, including the support@freethought-internet.co.uk e-mail address.

    This will not affect client services in any way, however it will mean that we are unable to receive support tickets etc. whilst the upgrade takes place. If you need support during this time, then please raise a ticket directly through the Freethought customer billing and support portal at https://portal.freethought-internet.co.uk or tweet us on @freethoughtnet.

    Update: The e-mail server migration has been successfully completed and normal service has been resumed.

  • Date - 02/08/2013 21:00 - 02/08/2013 23:59
  • Last Updated - 03/08/2013 02:36
Scheduled server reboot (Resolved)
  • Priority - Medium
  • Affecting Server - LDeX1-Plesk1
  • We will be performing scheduled maintenance including essential software updates on TMA02/Enigma between 21:00 and 23:59 on 31/07/2013. This will require us to reboot the server so there will be a loss of service whilst this is done. The maintenance work itself should take less than 30 minutes to complete and we expect downtime to be much less than this.

    This is essential maintenance to ensure the ongoing stability and security of this server. Please accept our apologies for any inconvenience that this will cause and feel free to contact our support staff via the usual means if you have any questions or concerns.

    Update: Unfortunately, despite a clean shutdown this server required a filesystem check on boot. This is currently being run and may take a little time to complete.

    Update: The server is now back online again and normal service has been resumed. Please accept our apologies for the longer than expected interruption to service as part of this scheduled maintenance.

  • Date - 31/07/2013 21:00 - 31/07/2013 23:59
  • Last Updated - 31/07/2013 22:40
Scheduled server reboot (Resolved)
  • Priority - Medium
  • Affecting Server - LDeX1-cPanel1
  • We will be performing scheduled maintenance including essential software updates on TMA03/Tsung between 21:00 and 23:59 on 31/07/2013. This will require us to reboot the server so there will be a loss of service whilst this is done. The maintenance work itself should take less than 30 minutes to complete and we expect downtime to be much less than this.

    This is essential maintenance to ensure the ongoing stability and security of this server. Please accept our apologies for any inconvenience that this will cause and feel free to contact our support staff via the usual means if you have any questions or concerns.

    Update: The maintenance work has been completed and the server is back online. Total downtime was approximately 6 minutes. If you are still experiencing any problems, please don't hesitate to get in touch with our helpdesk.

  • Date - 31/07/2013 21:00 - 31/07/2013 23:59
  • Last Updated - 31/07/2013 22:13
Scheduled server reboot (Resolved)
  • Priority - Medium
  • Affecting Server - TMA01/Japetus
  • We will be performing scheduled maintenance including essential software updates on TMA01/Japetus between 21:00 and 23:59 on 31/07/2013. This will require us to reboot the server so there will be a loss of service whilst this is done. The maintenance work itself should take less than 30 minutes to complete and we expect downtime to be much less than this.

    This is essential maintenance to ensure the ongoing stability and security of this server. Please accept our apologies for any inconvenience that this will cause and feel free to contact our support staff via the usual means if you have any questions or concerns.

    Update: The maintenance work has been completed and the server is back online. Total downtime was approximately 7 minutes. If you are still experiencing any problems, please don't hesitate to get in touch with our helpdesk.

  • Date - 31/07/2013 21:00 - 31/07/2013 23:59
  • Last Updated - 31/07/2013 22:12
LDeX1 network at-risk (Resolved)
  • Priority - Medium
  • Affecting System - LDeX1 network
  • One of our upstream carriers has informed us that they will be performing emergency network maintenance that will affect one of our circuits between LDeX1 and THN between 23:00 and 23:59 on 27/07/2013.

    During this time, they will be restarting a process on a metro switch that forms part of their network backbone which has experienced a partial software failure. All traffic will be moved off the affected connection before the maintenance starts.

    Due to the redundant nature of our network, we do not anticipate there to be any impact on customer services as a result of this maintenance. however during this time, connectivity should be considered "at risk" due to the reduced level of redundancy available.

    If you have any questions, then please do not hesitate to get in touch with our helpdesk.

    Update: Full network redundancy has been restored.

  • Date - 27/07/2013 23:00 - 27/07/2013 23:59
  • Last Updated - 28/07/2013 12:35
LDeX1 network at-risk (Resolved)
  • Priority - Medium
  • Affecting System - LDeX1 network
  • Level(3) have informed us that they will be carrying out the annual building wide power test at their Goswell Road datacentre on 06/07/2013.

    During this test they will simulate a full mains power failure in order to evaluate the automatic startup of the generators and the switch-over of the UPS from mains to generator supply.

    Level(3) do not expect any interruption to the protected power in the datacentre during these works, however the site is to be considered "at-risk" for the duration in case of any unexpected failures of the critical power infrastructure.

    Electrical staff will be on site throughout the testing work and can quickly restore the facility to mains power if required.

    One of the network paths connecting the Freethought/AS41000 network in LDeX1 to our network in Telehouse North is routed via the Level(3) Goswell Road datacentre and so will be "at-risk" whilst this work is being carried out due to a potential reduction in network redundancy.

    The second network path connecting these two PoPs together does not go via the Level(3) Goswell Road datacentre and therefore will not be affected in the unlikely event of a power outage.

    If you have any questions about this work or the impact that it may have on your services, then please do not hesitate to get in touch with our helpdesk.

    Update: The test has been completed without incident.

  • Date - 06/07/2013 00:00 - 06/07/2013 23:59
  • Last Updated - 25/07/2013 20:53
Reduced DNS server redundancy (Resolved)
  • Priority - Medium
  • Affecting System - Tertiary DNS server
  • We are currently investigating the failure of our tertiary DNS server tertiary.freethought-dns.co.uk (93.89.92.172). The primary and secondary DNS servers are working as normal so customers should not see any noticeable impact.

    Update: The company that provides the off-site server for our tertiary DNS have advised that they experienced a problem with the underlying SAN which in turn required a restart of the SAN controllers and all virtual machines. The tertiary DNS server is back online and we are monitoring it closely in case of any further problems.
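
    For anyone who wants to verify each resolver independently rather than relying on normal client-side fallback, the check below queries one server at a time. It is a rough Python sketch using the dnspython library; the hostname being looked up is a placeholder, the tertiary address is the one quoted in this notice, and the primary/secondary addresses would need to be filled in.

        import dns.resolver  # dnspython: pip install dnspython

        # Query the same record against each nameserver directly so that a
        # failed server shows up explicitly instead of being silently skipped.
        NAMESERVERS = {
            "tertiary": "93.89.92.172",  # tertiary.freethought-dns.co.uk (from this notice)
            # "primary": ..., "secondary": ...  # substitute the real addresses
        }

        for name, ip in NAMESERVERS.items():
            resolver = dns.resolver.Resolver(configure=False)
            resolver.nameservers = [ip]
            resolver.lifetime = 3.0  # seconds before the server is treated as down
            try:
                answer = resolver.resolve("example.com", "A")  # placeholder hostname
                print(name, ip, "OK:", [r.to_text() for r in answer])
            except Exception as exc:
                print(name, ip, "FAILED:", exc)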

  • Date - 20/07/2013 22:42 - 21/07/2013 23:32
  • Last Updated - 21/07/2013 00:20
Network maintenance (Resolved)
  • Priority - Medium
  • Affecting System - AS41000 network
  • One of our upstream carriers has informed us that they will be performing emergency network maintenance that will affect one of our circuits between LDeX1 and THN between 23:00 and 23:59 on 26/05/2013.

    During this time, they will be replacing a switch on their network which has begun showing signs of a hardware failure. All traffic will be moved off the affected connection before the maintenance starts.

    Due to the redundant nature of our network, we do not anticipate there to be any impact on customer services as a result of this maintenance; however, during this time, connectivity should be considered "at risk" due to the reduced level of redundancy available.

    If you have any questions, then please do not hesitate to get in touch with our helpdesk.

    Update: This maintenance work has been completed successfully and the connection to this upstream carrier has been restored. If you are having any problems, please don't hesitate to get in touch with our helpdesk.

  • Date - 26/05/2013 23:00 - 26/05/2013 23:59
  • Last Updated - 29/06/2013 21:39
Unexpected reboot (Resolved)
  • Priority - High
  • Affecting System - LDeX1-VPS3
  • We have experienced two unexpected reboots of the LDeX1-VPS3 node this evening. These reboots were the result of attempts to unload a kernel module in response to investigations into packet loss problems affecting this server.

    Alternative measures have now been put in place and the LDeX1-VPS3 node is stable. Please accept our apologies for the inconvenience and don't hesitate to get in touch with our helpdesk if you are still experiencing problems.

  • Date - 01/06/2013 19:37 - 02/06/2013 21:04
  • Last Updated - 02/06/2013 21:22
Scheduled server reboot (Resolved)
  • Priority - Medium
  • Affecting Server - TMA01/Japetus
  • We will be performing scheduled maintenance including essential software updates on TMA01/Japetus between 22:00 and 23:59 on 14/05/2013. This will require us to reboot the server so there will be a loss of service whilst this is done. The maintenance work itself should take less than 30 minutes to complete and we expect downtime to be much less than this.

    This is essential maintenance to ensure the ongoing stability and security of this server. Please accept our apologies for any inconvenience that this will cause and feel free to contact our support staff via the usual means if you have any questions or concerns.

    Update: The TMA01/Japetus server has been successfully rebooted following the maintenance work and all services are functioning normally. Please get in touch with our helpdesk if you are experiencing any problems with accounts hosted on this server.

  • Date - 14/05/2013 22:00 - 14/05/2013 23:59
  • Last Updated - 14/05/2013 23:22
Web site issue on TMA01/Japetus (Resolved)
  • Priority - High
  • Affecting Server - TMA01/Japetus
  • We are currently investigating an issue on the TMA01/Japetus server with web sites not loading.

    Update: Unfortunately we have been unable to gain access to the server either locally or remotely and so it is now being rebooted.

    Update: The server has been successfully rebooted and normal service has been restored. Please accept our apologies for the inconvenience caused and don't hesitate to get in touch with our helpdesk if you are still experiencing problems with your services.

  • Date - 13/02/2013 13:12 - 13/02/2013 13:59
  • Last Updated - 31/03/2013 11:50
Issues with TMA01 (Resolved)
  • Priority - Critical
  • Affecting Server - TMA01/Japetus
  • We are currently investigating issues with the TMA01/Japetus server.

    Update: Whilst the server is online, we are unable to access it via RDP or KVMoIP. An engineer has been dispatched to the data centre to investigate further.

    Update: After investigating, the decision was made to reboot the server. This has been completed successfully and normal service has been restored. Please accept our apologies for the inconvenience caused and don't hesitate to get in touch with our helpdesk if you are still experiencing problems with your services.

  • Date - 31/03/2013 10:38 - 31/03/2013 11:49
  • Last Updated - 31/03/2013 11:49
Server reboot (Resolved)
  • Priority - Medium
  • Affecting Server - TMA01/Japetus
  • We will be performing scheduled maintenance including essential software updates on TMA01/Japetus between 22:00 and 23:59 on 28/03/2013. This will require us to reboot the server so there will be a loss of service whilst this is done. The maintenance work itself should take less than 30 minutes to complete and we expect downtime to be much less than this.

    This is essential maintenance to ensure the ongoing stability and security of this server. Please accept our apologies for any inconvenience that this will cause and feel free to contact our support staff via the usual means if you have any questions or concerns.

    Update: The TMA01/Japetus server has been successfully rebooted following the maintenance work and all services are functioning normally. Please get in touch with our helpdesk if you are experiencing any problems with accounts hosted on this server.

  • Date - 28/03/2013 22:00 - 28/03/2013 23:59
  • Last Updated - 28/03/2013 23:58
Network maintenance (Resolved)
  • Priority - Medium
  • Affecting System - AS41000 network
  • One of our upstream carriers has informed us that they will be performing network maintenance that will affect one of our IP transit connections between 22:30 and 23:30 on 26/03/2013.

    During this time, they will be upgrading the firmware on the router terminating this connection on their network. We are anticipating approximately a 15 minute loss of connectivity whilst this work is carried out. All traffic will be moved off the affected connection before the maintenance starts.

    Due to the redundant nature of our network, we do not anticipate there to be any impact on customer services as a result of this maintenance; however, during this time, connectivity should be considered "at risk" due to the reduced level of redundancy available.

    If you have any questions, then please do not hesitate to get in touch with our helpdesk.

    Update: This maintenance work has been completed successfully and the connection to this upstream carrier has been restored. If you are having any problems, please don't hesitate to get in touch with our helpdesk.

  • Date - 26/03/2013 22:30 - 26/03/2013 23:30
  • Last Updated - 26/03/2013 23:24
Packet loss (Resolved)
  • Priority - Critical
  • Affecting System - AS41000 network
  • Between approximately 12:51 and 13:09 customers may have noticed packet loss and increased latency to servers on our AS41000 network due to an inbound DDoS. This has now been mitigated and normal service has been restored.

    Please accept our apologies for the inconvenience caused by this morning's attack. Please don't hesitate to get in touch with our helpdesk if you are still experiencing any problems.

  • Date - 25/03/2013 12:51 - 25/03/2013 13:09
  • Last Updated - 25/03/2013 13:45
Packet loss (Resolved)
  • Priority - Critical
  • Affecting System - AS41000 network
  • We are currently experiencing a DDoS attack against one of our customers which is causing high packet loss on our network. We are working to resolve this and restore normal service ASAP.

    Update: The network should now be stabilised; apologies for the inconvenience caused by this morning's attack. Please don't hesitate to get in touch with our helpdesk if you are still experiencing any problems.

  • Date - 12/03/2013 08:42 - 12/03/2013 09:00
  • Last Updated - 12/03/2013 09:10
Support system maintenance (Resolved)
  • Priority - High
  • Affecting System - Freethought helpdesk, e-mail, customer portal and web-site
  • We will be carrying out some essential scheduled maintenance work on the server that hosts our web-site, customer portal, helpdesk and e-mail systems on Thursday 21/02/2013 between 17:00 and 18:00.

    All of our public facing services hosted on this server will be unavailable whilst we carry out this maintenance work. If you need to contact us urgently during this time, please tweet us on @freethoughtnet and we will reply as soon as possible.

    Please accept our apologies for any inconvenience caused and don't hesitate to get in touch with our helpdesk if you have any questions.

    Update: This work was completed successfully.

  • Date - 21/02/2013 17:00 - 21/02/2013 18:00
  • Last Updated - 25/02/2013 09:43
Network maintenance (Resolved)
  • Priority - Medium
  • Affecting System - LDeX1 network
  • One of our upstream carriers has informed us that they will be performing network maintenance that will affect one of our circuits between LDeX1 and THN on 14/02/2013 between 21:00 and 23:59 which will remove this circuit from service whilst devices providing the circuit are updated and rebooted.

    Due to the redundant nature of our network, we do not anticipate there to be any impact on customer services delivered in LDeX1 as a result of this maintenance; however, during this time, connectivity at LDeX1 should be considered "at risk" due to the reduced level of redundancy available.

    If you have any questions, then please do not hesitate to get in touch with our helpdesk.

    Update: This maintenance work has been completed successfully and full redundancy has been restored to the network.

  • Date - 14/02/2013 21:00 - 14/02/2013 23:59
  • Last Updated - 14/02/2013 21:23
Server reboot (Resolved)
  • Priority - Medium
  • Affecting Server - TMA01/Japetus
  • We will be performing scheduled maintenance including essential software updates on TMA01/Japetus between 22:00 and 23:59 on 07/02/2013. This will require us to reboot the server so there will be a loss of service whilst this is done. The maintenance work itself should take less than 30 minutes to complete and we expect downtime to be much less than this.

    This is essential maintenance to ensure the ongoing stability and security of this server. Please accept our apologies for any inconvenience that this will cause and feel free to contact our support staff via the usual means if you have any questions or concerns.

    Update: The TMA01/Japetus server has been successfully rebooted following the maintenance work and all services are functioning normally. Please get in touch with our helpdesk if you are experiencing any problems with accounts hosted on this server.

  • Date - 07/02/2013 22:00 - 07/02/2013 23:59
  • Last Updated - 07/02/2013 23:24
Network maintenance (Resolved)
  • Priority - Medium
  • Affecting System - AS41000 network
  • One of our upstream carriers has informed us that they will be performing network maintenance that will affect one of our IP transit connections between 23:00 on 04/02/2013 and 05:00 on 05/02/2013.

    During this time, the router terminating this connection on their network will be replaced and each connection moved over to the new router individually. We are anticipating approximately a 10 minute loss of connectivity whilst this work is carried out. All traffic will be moved off the affected connection before the maintenance starts.

    Due to the redundant nature of our network, we do not anticipate there to be any impact on customer services as a result of this maintenance; however, during this time, connectivity should be considered "at risk" due to the reduced level of redundancy available.

    If you have any questions, then please do not hesitate to get in touch with our helpdesk.

    Update: Our upstream carrier has informed us that this work has been postponed due to other maintenance work overrunning. This maintenance work will be re-scheduled at a later date.

  • Date - 04/02/2013 23:00 - 05/02/2013 05:00
  • Last Updated - 05/02/2013 18:19
Network maintenance (Resolved)
  • Priority - Medium
  • Affecting System - LDeX1 network
  • One of our upstream carriers has informed us that they will be performing network maintenance that will affect one of our circuits between LDeX1 and THN on 27/01/2013 between 00:01 and 05:00 which will remove this circuit from service whilst devices providing the circuit are updated and rebooted.

    Due to the redundant nature of our network, we do not anticipate there to be any impact on customer services delivered in LDeX1 as a result of this maintenance; however, during this time, connectivity at LDeX1 should be considered "at risk" due to the reduced level of redundancy available.

    If you have any questions, then please do not hesitate to get in touch with our helpdesk.

    Update: This maintenance work has been completed successfully without any impact to services and full redundancy has been restored.

  • Date - 27/01/2013 00:01 - 27/01/2013 05:00
  • Last Updated - 28/01/2013 19:44
Issues with LDeX1 Firewall Cluster 1 (Resolved)
  • Priority - Critical
  • Affecting System - LDeX1-FWCL1
  • We are currently investigating issues with one of the nodes in firewall cluster 1 in LDeX1. This is affecting connectivity to some of our shared servers as well as some managed customers.

    Update: This issue was caused by the firmware versions on two of the cluster nodes falling out of sync following a previously failed software update, and one of the nodes then rebooting. The affected node has been rebooted and successfully upgraded, which has restored normal functionality to firewall cluster 1. Please accept our apologies for the inconvenience caused and don't hesitate to get in touch with our helpdesk if you are still experiencing problems with your services.

  • Date - 27/12/2012 12:10 - 27/12/2012 12:13
  • Last Updated - 27/12/2012 12:19
Loss of service (Resolved)
  • Priority - Critical
  • Affecting Server - LDeX1-cPanel1
  • We are currently investigating service affecting problems with the TMA03/Tsung server.

    Update: Unfortunately this server is not responding remotely, so a technician has been dispatched to investigate and should be on-site in approximately 30 minutes.

    Update: The technician has arrived on site and the server is being rebooted now.

    Update: The server is currently undergoing a filesystem check (fsck).

    Update: The fsck has completed and the server has completed booting. We are checking that everything is back up and running as normal. 

    Update: All functionality seems to be working as normal. Please accept our apologies for the inconvenience and don't hesitate to get in touch with our helpdesk if you are still experiencing problems.
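
    As a general illustration of how "everything is back up" can be checked mechanically after a reboot, the short Python sketch below probes a few common service ports on a host. The hostname and the port list are placeholders, not a description of this server's configuration.

        import socket

        SERVICES = {"http": 80, "smtp": 25, "pop3": 110, "mysql": 3306}  # illustrative set

        def check_host(host):
            """Report whether each well-known service port accepts a TCP connection."""
            for name, port in SERVICES.items():
                try:
                    with socket.create_connection((host, port), timeout=5):
                        print(name, port, "up")
                except OSError as exc:
                    print(name, port, "DOWN:", exc)

        check_host("server.example.com")  # placeholder hostname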

  • Date - 23/12/2012 16:57 - 23/12/2012 19:18
  • Last Updated - 23/12/2012 19:55
Network maintenance (Resolved)
  • Priority - Medium
  • Affecting System - LDeX1 network
  • One of our upstream carriers has informed us that they will be performing network maintenance that will affect one of our circuits between LDeX1 and THN on 15/12/2012 between 00:01 and 05:00 which may cause up to 15 minutes of disruption.

    Due to the redundant nature of our network, we do not anticipate there to be any impact on customer services delivered in LDeX1 as a result of this maintenance; however, during this time, connectivity at LDeX1 should be considered "at risk" due to the reduced level of redundancy available.

    If you have any questions, then please do not hesitate to get in touch with our helpdesk.

    Update: This work has been completed successfully without any noticeable disruption to service. If you are experiencing any problems, please get in touch with our helpdesk.

  • Date - 15/12/2012 00:01 - 15/12/2012 05:00
  • Last Updated - 15/12/2012 11:14
Server reboot (Resolved)
  • Priority - Medium
  • Affecting Server - TMA01/Japetus
  • We will be performing scheduled maintenance including essential software updates on TMA01/Japetus between 22:00 and 23:59 on 13/12/2012. This will require us to reboot the server so there will be a loss of service whilst this is done. The maintenance work itself should take less than 30 minutes to complete and we expect downtime to be much less than this.

    This is essential maintenance to ensure the ongoing stability and security of this server. Please accept our apologies for any inconvenience that this will cause and feel free to contact our support staff via the usual means if you have any questions or concerns.

    Update: The TMA01/Japetus server has been successfully rebooted following the maintenance work and all services are functioning normally. Please get in touch with our helpdesk if you are experiencing any problems with accounts hosted on this server.

  • Date - 13/12/2012 22:00 - 13/12/2012 23:59
  • Last Updated - 13/12/2012 23:47
Web server on TMA03 (Resolved)
  • Priority - Critical
  • Affecting Server - LDeX1-cPanel1
  • We are currently investigating problems with the Apache web server software on TMA03/Tsung which is preventing any web-sites from loading. We believe that we have identified the underlying cause of this problem and hope to have it resolved shortly. Please accept our apologies for any inconvenience caused by this.

    Update: Normal functionality has been restored and all web sites hosted on TMA03/Tsung should be back online. If you are still experiencing problems, please don't hesitate to get in touch with our support staff in the usual manner.

  • Date - 05/12/2012 16:10 - 05/12/2012 16:39
  • Last Updated - 05/12/2012 16:41
Internal Server Errors (Resolved)
  • Priority - High
  • Affecting Server - LDeX1-cPanel1
  • We are currently investigating Internal Server Error messages for PHP content hosted on TMA03/Tsung.

    Update: This has been resolved and PHP pages are once again being served correctly. Please accept our apologies for the inconvenience and don't hesitate to let us know if you are still experiencing problems.

  • Date - 12/11/2012 08:40 - 12/11/2012 08:58
  • Last Updated - 12/11/2012 09:00
Network disruption (Resolved)
  • Priority - Critical
  • Affecting System - AS41000
  • At 11:26 some customers will have experienced a brief disruption as we lost one of our connections to our upstream network providers. Our network automatically re-routed around this and full connectivity was restored via our other upstream network providers within 3 minutes.

    We have seen the connection to the affected upstream provider return to normal, however we have manually disabled the connection whilst they investigate the cause of this issue. At this time the AS41000 network should be considered "at risk" due to the reduced level of redundancy.

    Whether or not individual users were affected by this issue will depend on the route which their traffic takes to enter and leave our network. Customers whose traffic traverses our other upstream network provider under normal network operation would have seen no impact.

    Update: The affected upstream network provider have confirmed that one of their core routers crashed and rebooted. They believe they know what caused this and are working with Cisco TAC to confirm. We have seen six hours of stable connectivity from this upstream network provider, so we have re-enabled our connection to them. Some traffic is once again flowing over their network and full redundancy has been restored on AS41000.

  • Date - 09/11/2012 11:26 - 09/11/2012 11:29
  • Last Updated - 09/11/2012 17:26
Network maintenance (Resolved)
  • Priority - Medium
  • Affecting System - AS41000 network
  • We will be carrying out some maintenance work on the connection to one of our upstream carriers for up to 15 minutes on 05/10/2012 between 22:00 and 23:59. During this time we will gracefully shut down the connection to this upstream network provider in order to prevent any disruption to client connectivity.

    Due to the redundant nature of our network, we do not anticipate there to be any impact on customer services as a result of this maintenance; however, during this time, connectivity should be considered "at risk" due to the reduced level of redundancy available across the network.

    If you have any questions, then please do not hesitate to get in touch with our helpdesk.

    Update: This maintenance work has been completed successfully and full redundancy has been restored to the network.

  • Date - 05/10/2012 22:00 - 05/10/2012 23:59
  • Last Updated - 05/10/2012 22:05
Network disruption (Resolved)
  • Priority - High
  • Affecting System - LDeX1 network
  • At 23:04 we experienced a brief network blip lasting approximately 2 to 3 minutes. This was caused by an issue with one of our upstream providers, who have confirmed that their monitoring also picked up the problem and are investigating further.

    Update: Our upstream provider has completed their investigations and found that unannounced maintenance by a third party supplier caused last night's network connectivity issue. They assure us that this has been addressed with the supplier in question.

  • Date - 01/10/2012 23:04 - 01/10/2012 23:06
  • Last Updated - 02/10/2012 15:27
Network maintenance (Resolved)
  • Priority - Medium
  • Affecting System - LDeX1 network
  • One of our upstream carriers has informed us that they will be performing network maintenance that will affect one of our circuits between LDeX1 and THN on 01/10/2012 between 21:00 and 23:00 which may cause up to 15 minutes of disruption.

    Due to the redundant nature of our network, we do not anticipate there to be any impact on customer services delivered in LDeX1 as a result of this maintenance; however, during this time, connectivity at LDeX1 should be considered "at risk" due to the reduced level of redundancy available.

    If you have any questions, then please do not hesitate to get in touch with our helpdesk.

    Update: All maintenance work has been completed successfully and full redundancy has been restored.

  • Date - 01/10/2012 21:00 - 01/10/2012 23:00
  • Last Updated - 01/10/2012 22:59
Issues with TMA02 (Resolved)
  • Priority - Critical
  • Affecting Server - LDeX1-Plesk1
  • We have become aware of issues with TMA02 causing the server to currently be unavailable. We are investigating this as a matter of urgency with an aim to restore service as soon as possible. Please get in touch with our support staff if you have any questions.

    Update: We have been forced to reboot this server and it is currently running a filesystem check. We hope to have the server back online as soon as this check finishes.

    Update: The file system check has finished and TMA02/Enigma is back online. We are continuing to monitor the server in case there are any further problems. Once again, please accept our apologies for the inconvenience caused by this unscheduled disruption.

  • Date - 14/09/2012 07:00 - 14/09/2012 09:30
  • Last Updated - 14/09/2012 09:35
Upstream emergency network maintenance (Resolved)
  • Priority - High
  • Affecting System - LDeX1 network
  • We have been informed by one of our upstream network providers that they will be carrying out emergency maintenance work on a portion of their network at 23:00 on 13/09/2012. This work is service affecting on one of our two connections to this provider, so we will be disabling the affected connection beforehand in order to prevent any noticeable disruption to customers.

    The upstream network provider expects the maintenance work to take approximately 5-10 minutes to complete, however we have scheduled a 30 minute window to allow for any unexpected complications as well as testing the connection before returning it to service once the upstream network provider has given the all-clear.

    Whilst we do not anticipate any disruption to customer services, the network in LDeX1 should be considered at-risk during this maintenance window due to the reduced level of redundancy available. If you have any questions, please get in touch with our support staff.

    Update: Our upstream network provider has confirmed that this maintenance work has been completed successfully and we have re-enabled our connectivity to them.

  • Date - 13/09/2012 23:00 - 13/09/2012 23:30
  • Last Updated - 13/09/2012 23:16
Network maintenance (Resolved)
  • Priority - Medium
  • Affecting System - LDeX1 network
  • One of our upstream carriers has informed us that they will be performing network maintenance that will affect one of our circuits between LDeX1 and THN on 10/09/2012 between 00:01 and 06:00 which may cause up to 5 minutes of disruption.

    Due to the redundant nature of our network, we do not anticipate there to be any impact on customer services delivered in LDeX1 as a result of this maintenance; however, during this time, connectivity at LDeX1 should be considered "at risk" due to the reduced level of redundancy available.

    If you have any questions, then please do not hesitate to get in touch with our helpdesk.

    Update: This maintenance work was completed without incident.

  • Date - 10/09/2012 00:01 - 10/09/2012 06:00
  • Last Updated - 13/09/2012 07:24
DDoS attack (Resolved)
  • Priority - Critical
  • Affecting System - AS41000 network
  • One of our customers appears to be under a DDoS attack. This caused intermittent network issues starting at approximately 20:09. These issues were mostly brought under control by 20:12, although customers may still be seeing very small amounts of packet loss whilst we investigate further. Please accept our apologies for the inconvenience.

    Update: We are continuing to monitor the status of the network. Since 20:12 packet loss levels have been reduced to <1%. We are working with the targeted client to determine the source of this attack and help them mitigate it wherever possible.

    Update: We have been monitoring for 2 hours now and haven't seen any further network issues as a result of this ongoing DDoS.

    Update: At 00:45 we saw a massive increase in DDoS traffic levels, causing further packet loss. This was mitigated for most customers by 01:10, with the remaining customers seeing service restored by 01:30. Please accept our apologies for the inconvenience and if you have any questions then please don't hesitate to contact our support staff.
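
    For customers who want to put a number on packet loss towards their own services during an incident like this, a timed burst of pings is usually enough. The Python sketch below wraps the system ping tool; the target hostname is a placeholder and the output parsing assumes the Linux iputils ping format.

        import re
        import subprocess

        def packet_loss(host, count=20):
            """Return the percentage packet loss reported by a burst of ICMP pings.

            Relies on the Linux iputils ping summary line ("X% packet loss")."""
            out = subprocess.run(["ping", "-c", str(count), host],
                                 capture_output=True, text=True).stdout
            match = re.search(r"([\d.]+)% packet loss", out)
            return float(match.group(1)) if match else None

        print(packet_loss("www.example.com"))  # placeholder target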

  • Date - 08/09/2012 20:09 - 08/09/2012 20:12
  • Last Updated - 12/09/2012 10:47
Router crash (Resolved)
  • Priority - Critical
  • Affecting System - AS41000
  • At 01:10 we experienced a crash of our router in Telehouse North, which caused up to 3 minutes of disruption to customers, depending on where their traffic enters our network, whilst all inbound traffic was automatically re-routed via our other router in LDeX1.

    Further interruptions to inbound traffic arriving on the AS41000 network in Telehouse North may have been seen at 01:37 and 01:43 whilst we attempted to troubleshoot the issues with the Telehouse North router.

    The Telehouse North router has been rebooted and is back online, however all inbound traffic is currently arriving on our network via the LDeX1 router whilst we investigate the underlying cause of this crash on the Telehouse North router.

    Because of the reduced redundancy currently available, the network should be considered "at-risk" until we have conducted further tests and returned the Telehouse North router to service.

    Please accept our apologies for the inconvenience caused by the unexpected crash of this router. If you have any questions, please don't hesitate to get in touch with our helpdesk.

    Update: We are carrying out further tests on the Telehouse North router this morning, including a software update. The router will remain withdrawn from service until we are happy that we have identified and resolved the cause of the crash to prevent any future recurrences.

    Update: We have successfully upgraded the software on the Telehouse North router and returned it to service. We will continue to monitor this router in case of any further problems. Once again, please accept our apologies for the inconvenience caused.

  • Date - 02/09/2012 01:10 - 02/09/2012 01:13
  • Last Updated - 02/09/2012 23:58
Network maintenance (Resolved)
  • Priority - Medium
  • Affecting System - AS41000 network
  • One of our upstream network connectivity providers has informed us that they will be replacing switches on their network in order to increase capacity on 25/08/2012 at 14:00.

    They have issued a maintenance window from 10:00 on 25/08/2012 to 22:00 on 26/08/2012, however the actual service affecting work is only expected to take place between 10:00 and 11:00 on 26/08/2012. The period either side of this service affecting work should be considered "at risk" as pre and post installation checks will be taking place.

    Where possible we will route traffic away from the affected links prior to the maintenance. Customers in Maidenhead should consider the network "at risk" during this period due to the reduced level of redundancy available. Customers in Manchester may experience brief interruptions to connectivity in the form of packet loss and/or increased latency whilst spanning tree re-converges around the new network topology.

  • Date - 25/08/2012 10:00 - 26/08/2012 22:00
  • Last Updated - 02/09/2012 10:45
Packet loss (Resolved)
  • Priority - High
  • Affecting System - AS41000 network
  • One of our upstream network providers is experiencing packet loss on their network in Maidenhead. We have reported this to them and disabled our connections to their network.

    Update: The network provider in question has confirmed that this was due to multiple DDoS attacks against their customers saturating their Maidenhead network, however we are awaiting a full RFO. Currently we have left our connectivity to this provider disabled whilst we monitor the stability of their network.

    Update: We have not seen any further issues overnight and so have re-enabled our connections to this upstream network provider. We continue to monitor the network closely for any recurrences. Please let our support staff know if you are experiencing any problems.

  • Date - 22/08/2012 19:27
  • Last Updated - 23/08/2012 07:25
Network maintenance (Resolved)
  • Priority - Medium
  • Affecting System - AS41000 network
  • One of our upstream network connectivity providers has informed us that they will be replacing switches on their network in order to increase capacity on 12/08/2012 at 14:00.

    They have issued a maintenance window from 10:00 on 11/08/2012 to 22:00 on 12/08/2012, however the actual service affecting work is only expected to take place between 14:00 and 14:30 on 12/08/2012. The period either side of this service affecting work should be considered "at risk" as pre and post installation checks will be taking place.

    Where possible we will route traffic away from the affected links prior to the maintenance. Customers in Maidenhead should consider the network "at risk" during this period due to the reduced level of redundancy available. Customers in Manchester may experience brief interruptions to connectivity in the form of packet loss and/or increased latency.

    Update: We have just seen a series of network flaps on this upstream network provider's circuits, so it appears that something service affecting has happened outside of the 14:00-14:30 window. As a result, we have disabled our connectivity via this upstream network provider wherever possible.

    Update: We are currently seeing a complete loss of service across all of our upstream suppliers - we are investigating if this is an issue with our routers.

    Update: A software bug on our routers had been triggered and both Maidenhead routers required a reboot in order to restore service.

    Update: Our upstream network provider has confirmed that they brought the maintenance forward to 10:00 without notifying us. They have advised that we shouldn't expect any further interruption to service. Please accept our apologies for the inconvenience caused and don't hesitate to get in touch with our support staff if you are still experiencing any problems.

  • Date - 11/08/2012 10:00 - 12/08/2012 22:00
  • Last Updated - 16/08/2012 00:15
Loss of connectivity to Synergy House (Resolved)
  • Priority - Critical
  • Affecting System - All Manchester services
  • We are currently investigating a loss of connectivity to our equipment at Telecity Synergy House in Manchester.

    Update: We are currently waiting for Telecity remote hands staff to investigate our connectivity provider's switch in Synergy House to determine why it is not responding remotely.

    Update: Connectivity has been restored; we are awaiting a full report from our connectivity provider on the exact cause. In the meantime, please get in touch if you are still experiencing any problems with connectivity to Telecity Synergy House.

    Update: Telecity have confirmed that last night's issue was due to the loss of power on one of their devices in Telecity Williams House. We are waiting for further details from Telecity as to exactly what happened, why the network didn't re-route around the failed device and why this took so long to diagnose and resolve. Please accept our apologies for the inconvenience caused by this extended loss of service.

  • Date - 06/08/2012 00:37 - 06/08/2012 02:16
  • Last Updated - 06/08/2012 09:58
Loss of connectivity to Synergy House (Resolved)
  • Priority - Critical
  • Affecting System - All Manchester services
  • We are currently investigating a loss of connectivity to our equipment at Telecity Synergy House in Manchester.

    Update: Connectivity has been restored and we have confirmed that we did not lose power to any of our network equipment. We are currently waiting for further information from Telecity.

    Update: Telecity have confirmed that they were carrying out work to migrate former UK Grid customers over to the Telecity network, which failed and was rolled back. We were previously informed that this work had been cancelled and was going to be re-scheduled for a later date. We are seeking further clarification from Telecity.

  • Date - 05/07/2012 00:56 - 05/07/2012 01:04
  • Last Updated - 18/07/2012 22:47
Network disruption (Resolved)
  • Priority - Critical
  • Affecting System - AS41000
  • We have received a large number of alerts from our monitoring system indicating significant connectivity issues and we are currently investigating.

    Update: Connectivity seems to have been restored and all alerts have cleared.

    Update: It appears that both of the Maidenhead routers unexpectedly rebooted. We are investigating what could have caused this but have confirmed that there was no loss of power to any equipment. All services should be back up and running since 22:21; if you are still experiencing any problems or if you have any questions please do not hesitate to get in touch with our support staff in the usual manner.

  • Date - 18/07/2012 22:16 - 18/07/2012 22:21
  • Last Updated - 18/07/2012 22:47
Manchester network maintenance (Resolved)
  • Priority - Medium
  • Affecting System - Manchester network
  • One of our upstream network providers in Manchester has informed us that they will be performing maintenance work on their network between 23:00 on 04/07/2012 and 02:00 on 05/07/2012.

    During this time, customers may experience brief interruptions to connectivity in the form of packet loss and/or increased latency whilst traffic is re-routed over alternative paths. The Manchester network should be considered "at risk" during this period due to the reduced level of redundancy available.

    If you have any questions about this maintenance work, please contact our support staff in the usual manner.

    Update: Our upstream supplier has advised us that they have cancelled this maintenance and will be re-scheduling it for a later date.

  • Date - 04/07/2012 23:00 - 05/07/2012 02:00
  • Last Updated - 18/06/2012 20:53
Manchester network maintenance (Resolved)
  • Priority - Medium
  • Affecting System - Manchester network
  • One of our upstream network providers in Manchester has informed us that they will be performing maintenance work on their network between 23:00 on 03/07/2012 and 02:00 on 04/07/2012.

    During this time, customers may experience brief interruptions to connectivity in the form of packet loss and/or increased latency whilst traffic is re-routed over alternative paths. The Manchester network should be considered "at risk" during this period due to the reduced level of redundancy available.

    If you have any questions about this maintenance work, please contact our support staff in the usual manner.

    Update: Our upstream supplier has advised us that they have cancelled this maintenance and will be re-scheduling it for a later date.

  • Date - 03/07/2012 23:00 - 04/07/2012 02:00
  • Last Updated - 18/06/2012 20:53
Manchester network maintenance (Resolved)
  • Priority - Medium
  • Affecting System - Manchester network
  • One of our upstream network providers in Manchester has informed us that they will be performing maintenance work on their network between 23:00 on 02/07/2012 and 02:00 on 03/07/2012.

    During this time, customers may experience brief interruptions to connectivity in the form of packet loss and/or increased latency whilst traffic is re-routed over alternative paths. The Manchester network should be considered "at risk" during this period due to the reduced level of redundancy available.

    If you have any questions about this maintenance work, please contact our support staff in the usual manner.

    Update: Our upstream supplier has advised us that they have cancelled this maintenance and will be re-scheduling it for a later date.

  • Date - 02/07/2012 23:00 - 03/07/2012 02:00
  • Last Updated - 18/06/2012 20:53
Network maintenance (Resolved)
  • Priority - Medium
  • Affecting System - AS41000 network
  • One of our upstream network providers has advised us that they will be performing maintenance work upgrading their London network between 22:00 and 23:00 on 12/06/2012.

    During this time, customers may experience brief interruptions to connectivity in the form of packet loss and/or increased latency whilst traffic is re-routed over alternative paths. The AS41000 network should be considered "at risk" during this period due to the reduced level of redundancy available.

    If you have any questions about this maintenance work, please contact our support staff in the usual manner.

    Update: Our upstream provider has informed us that this work has been completed successfully.

  • Date - 12/06/2012 22:00 - 12/06/2012 23:00
  • Last Updated - 18/06/2012 00:44
Network maintenance (Resolved)
  • Priority - Medium
  • Affecting System - AS41000 network
  • One of our upstream network providers has advised us that they will be performing maintenance work upgrading their London network between 22:00 and 23:00 on 11/06/2012.

    During this time, customers may experience brief interruptions to connectivity in the form of packet loss and/or increased latency whilst traffic is re-routed over alternative paths. The AS41000 network should be considered "at risk" during this period due to the reduced level of redundancy available.

    If you have any questions about this maintenance work, please contact our support staff in the usual manner.

    Update: Our upstream provider has informed us that this work has been completed successfully.

  • Date - 11/06/2012 22:00 - 11/06/2012 23:00
  • Last Updated - 18/06/2012 00:44
Manchester network maintenance (Resolved)
  • Priority - High
  • Affecting System - Manchester network
  • One of our upstream network providers has advised us that they will be performing maintenance work on a circuit that runs from Telecity Williams House in Manchester to Telehouse North in London between 20:00 on 26/05/2012 and 06:00 on 27/05/2012.

    During this time, customers may experience brief interruptions to connectivity in the form of packet loss and/or increased latency whilst traffic is re-routed over alternative paths. The Manchester network should be considered "at risk" during this period due to the reduced level of redundancy available.

    If you have any questions about this maintenance work, please contact our support staff in the usual manner.

    Update: Our upstream network provider has confirmed that this work was completed successfully.

  • Date - 26/05/2012 20:00 - 27/05/2012 06:00
  • Last Updated - 18/06/2012 00:41
Reduced network redundancy (Resolved)
  • Priority - Medium
  • Affecting System - AS41000 network
  • At 18:34 the link to one of our upstream providers in Telehouse East went down. We are currently investigating with the third party supplier to establish the cause of this failure. All traffic was automatically re-routed around this failure, however customers traversing this link may have noticed a brief loss of service during this re-routing. We have disabled this connection whilst we investigate further in order to prevent the connection from flapping.

    In the meantime, we are running with reduced network redundancy. We still have connections to multiple upstream network providers, however one of those providers is now only connected via Telehouse North. We will continue to monitor the network to ensure that everything is running in a stable manner with no performance degradation.

    Update: The link to Telehouse East has come back up, however we are waiting for further information. We will keep the connection over this link disabled whilst we continue to monitor the stability.

    Update: Our upstream network provider has confirmed that they experienced a core network issue which caused the loss of service. The underlying cause of this was a misconfiguration which resulted in a loop saturating the multiple 10Gbps links between two core routers. They have confirmed that their network has remained stable since the loop was removed, so we have brought our BGP sessions to them back up.

  • Date - 12/06/2012 18:34 - 12/06/2012 18:50
  • Last Updated - 12/06/2012 21:36
Reduced network redundancy (Resolved)
  • Priority - Medium
  • Affecting System - AS41000 network
  • At 10:58 the link to one of our upstream providers in Telehouse East went down. We are currently investigating with the third party supplier to establish the cause of this failure. All traffic was automatically re-routed around this failure.

    In the meantime, we are running with reduced network redundancy. We still have connections to multiple upstream network providers, however one of those providers is now only connected via Telehouse North. We will continue to monitor the network to ensure that everything is running in a stable manner with no performance degradation.

    Update: This appears to be a problem with a cable to our upstream network supplier in Telehouse East. We are currently waiting for this cable to be re-crimped in order to restore the connection.

    Update: The cable has been re-crimped and the link has come back up. We have not yet returned this connection to service whilst we monitor the stability.

    Update: The connection has been stable for two hours since the cable was re-crimped, so we have brought the BGP sessions back up and traffic is once again flowing across this connection. We will continue to monitor this in order to ensure that the connection remains stable.

    Update: We have not seen any further problems with this connection since it was returned to service.

  • Date - 08/06/2012 10:58 - 08/06/2012 16:33
  • Last Updated - 08/06/2012 22:39
Network maintenance (Resolved)
  • Priority - Medium
  • Affecting System - AS41000 network
  • One of our upstream network providers has advised us that they will be performing maintenance work upgrading their London network between 22:00 on 02/06/2012 and 04:00 on 03/06/2012.

    During this time, customers may experience brief interruptions to connectivity in the form of packet loss and/or increased latency whilst traffic is re-routed over alternative paths. The AS41000 network should be considered "at risk" during this period due to the reduced level of redundancy available.

    If you have any questions about this maintenance work, please contact our support staff in the usual manner.

    Update: Our upstream network provider completed this scheduled maintenance work successfully and we have re-enabled our connectivity to them.

  • Date - 02/06/2012 22:00 - 03/06/2012 04:00
  • Last Updated - 03/06/2012 12:44
Internet issues (Resolved)
  • Priority - Critical
  • Affecting System - AS41000
  • We are currently investigating a number of alerts from our external monitoring system. So far our troubleshooting points to a major issue somewhere in London affecting a number of ISPs. We will update this status message as soon as we have any further information.

    Update: This seems to be an issue with LINX (the London Internet eXchange) which is causing widespread disruption to the UK internet. Traffic has been routed around LINX and connectivity seems to be stable again for the moment, although you may be experiencing packet loss due to congestion with some third parties where they do not have enough spare capacity to function without LINX. We will issue another update as soon as we have any further information, in the meantime please consider the network "at risk".

    Update: LINX have issued an all-clear on both the Juniper and Extreme LANs and traffic levels across LINX have returned to normal. We are continuing to monitor the network and haven't seen any further disruptions so far. We will provide details of exactly what caused today's issues as and when we receive them.

    Update: LINX and associated UK connectivity has remained stable since the all-clear was issued. Whilst we still do not know specific details, it seems that a member connected to LINX caused a network loop on both the Juniper and Extreme LANs and the loop protection on the Juniper LAN failed whilst there is apparently no loop protection on the Extreme LAN. LINX are investigating with Juniper to find out why the loop protection did not function as intended as well as investigating what protection can be put in place on the Extreme LAN.

  • Date - 31/05/2012 15:51 - 31/05/2012 16:22
  • Last Updated - 02/06/2012 15:33
Network maintenance (Resolved)
  • Priority - Medium
  • Affecting System - AS41000 network
  • One of our upstream network providers has advised us that they will be performing maintenance work upgrading their London network between 22:00 on 01/06/2012 and 04:00 on 02/06/2012.

    During this time, customers may experience brief interruptions to connectivity in the form of packet loss and/or increased latency whilst traffic is re-routed over alternative paths. The AS41000 network should be considered "at risk" during this period due to the reduced level of redundancy available.

    If you have any questions about this maintenance work, please contact our support staff in the usual manner.

    Update: Our upstream network provider completed this scheduled maintenance work successfully and we have re-enabled our connectivity to them. There will be similar work taking place tonight on another leg of our connectivity as per our other status announcement.

  • Date - 01/06/2012 22:00 - 02/06/2012 04:00
  • Last Updated - 02/06/2012 15:27
PHP upgrade (Resolved)
  • Priority - Medium
  • Affecting Server - LDeX1-cPanel1
  • We will be upgrading the version of PHP on TMA03/Tsung from 5.2.17 to 5.3.13 on 31/05/2012 between 21:00 and 23:59.

    The disruption from this work should be minimal whilst Apache is restarted to load the new version of PHP. You may wish to review PHP's documentation on the 5.2.x to 5.3.x upgrade at http://www.php.net/manual/en/migration53.incompatible.php in order to ensure that none of the backwards-incompatible changes will affect your code.
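
    As an illustration only (this is not part of the original notice): one quick way to flag code relying on functions deprecated in the 5.2.x to 5.3.x transition is to scan your document root for them. The sketch below is in Python; the /var/www/vhosts path and the (deliberately non-exhaustive) function list are assumptions for the example.

        import re
        from pathlib import Path

        # A few functions deprecated in PHP 5.3 (see the migration guide
        # linked above); this list is illustrative, not exhaustive.
        DEPRECATED = ["ereg", "eregi", "ereg_replace", "eregi_replace",
                      "split", "spliti", "session_register"]
        PATTERN = re.compile(r"\b(" + "|".join(DEPRECATED) + r")\s*\(")

        # /var/www/vhosts is a placeholder for your own document root.
        for php_file in Path("/var/www/vhosts").rglob("*.php"):
            text = php_file.read_text(errors="ignore")
            for lineno, line in enumerate(text.splitlines(), 1):
                if PATTERN.search(line):
                    print(f"{php_file}:{lineno}: {line.strip()}")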

    This is essential maintenance to ensure the ongoing stability and security of this server as the PHP 5.2.x branch is no longer supported. Please accept our apologies for any inconvenience that this will cause and feel free to contact our support staff via the usual means if you have any questions or concerns.

    Update: The PHP version on TMA03/Tsung has been successfully upgraded to 5.3.13.

  • Date - 31/05/2012 21:00 - 31/05/2012 23:59
  • Last Updated - 31/05/2012 22:09
Scheduled UPS maintenance in BlueSquare House (Resolved)
  • Priority - Medium
  • Affecting System - All Maidenhead services
  • We have received a notification from BlueSquare/Lumison/Pulsant that they will be carrying out routine maintenance on the N+1 UPS stack at the BlueSquare House in Maidenhead between 21:00 on the 28/05/2012 and 01:00 on the 29/05/2012.

    This is not service affecting, however services housed in BlueSquare House, Maidenhead should now be considered "at risk" whilst this maintenance is carried out as there will be no redundancy on the UPS equipment during this period. We will provide an update if we receive any further information. In the meantime, please feel free to contact our support staff in the usual manner should you have any questions.

    Update: BlueSquare/Lumison/Pulsant have confirmed that this work was completed successfully.

  • Date - 28/05/2012 21:00 - 29/05/2012 01:00
  • Last Updated - 30/05/2012 09:57
Maidenhead network maintenance (Resolved)
  • Priority - Medium
  • Affecting System - Maidenhead network
  • One of our upstream network providers has advised us that they will be performing maintenance work on their Maidenhead network between 21:00 and 23:59 on 17/05/2012.

    During this time, customers may experience brief interruptions to connectivity in the form of packet loss and/or increased latency whilst traffic is re-routed over alternative paths. The Maidenhead network should be considered "at risk" during this period due to the reduced level of redundancy available.

    If you have any questions about this maintenance work, please contact our support staff in the usual manner.

    Update: The upstream network provider in question has confirmed that this work has been completed successfully.

  • Date - 17/05/2012 21:00 - 17/05/2012 23:59
  • Last Updated - 23/05/2012 12:42
Server reboot (Resolved)
  • Priority - Medium
  • Affecting Server - LDeX1-Plesk1
  • We will be performing scheduled maintenance including essential software updates on TMA02/Enigma between 22:00 and 23:59 on the 15/05/2012. This will require us to reboot the server so there will be a loss of service whilst this is done. The maintenance work itself should take less than 20 minutes to complete and we expect downtime to be much less than this.

    This is essential maintenance to ensure the ongoing stability and security of this server. Please accept our apologies for any inconvenience that this will cause and feel free to contact our support staff via the usual means if you have any questions or concerns.

    Update: TMA02/Enigma is currently running a fsck due to the length of time since the file system was last checked.
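
    For context, ext file systems force a periodic fsck once a set mount count or time interval is exceeded, which is why a long-running server can hit a lengthy check on reboot. These counters can be read with tune2fs; a minimal Python sketch follows, with /dev/sda1 as an illustrative device path only.

        import subprocess

        # tune2fs -l prints the superblock fields, including the counters
        # that trigger a forced fsck at boot. /dev/sda1 is illustrative.
        FIELDS = ("Mount count", "Maximum mount count",
                  "Last checked", "Check interval", "Next check after")
        output = subprocess.run(["tune2fs", "-l", "/dev/sda1"],
                                capture_output=True, text=True, check=True).stdout
        for line in output.splitlines():
            if line.startswith(FIELDS):
                print(line)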

    Update: The server is now back online and normal service has been resumed. Please get in touch with our support staff if you are having any problems.

  • Date - 15/05/2012 22:00 - 15/05/2012 23:59
  • Last Updated - 15/05/2012 23:59
Server reboot (Resolved)
  • Priority - Medium
  • Affecting Server - LDeX1-cPanel1
  • We will be performing scheduled maintenance including essential software updates on TMA03/Tsung between 22:00 and 23:59 on the 15/05/2012. This will require us to reboot the server so there will be a loss of service whilst this is done. The maintenance work itself should take less than 20 minutes to complete and we expect downtime to be much less than this.

    This is essential maintenance to ensure the ongoing stability and security of this server. Please accept our apologies for any inconvenience that this will cause and feel free to contact our support staff via the usual means if you have any questions or concerns.

    Update: The server has been upgraded successfully and normal service has been resumed. Please get in touch with our support staff if you are having any problems.

  • Date - 15/05/2012 22:00 - 15/05/2012 23:59
  • Last Updated - 15/05/2012 23:16
Server reboot (Resolved)
  • Priority - Medium
  • Affecting Server - TMA01/Japetus
  • We will be performing scheduled maintenance including essential software updates on TMA01/Japetus between 22:00 and 23:59 on the 15/05/2012. This will require us to reboot the server so there will be a loss of service whilst this is done. The maintenance work itself should take less than 20 minutes to complete and we expect downtime to be much less than this.

    This is essential maintenance to ensure the ongoing stability and security of this server. Please accept our apologies for any inconvenience that this will cause and feel free to contact our support staff via the usual means if you have any questions or concerns.

    Update: The server has been upgraded successfully and normal service has been resumed. Please get in touch with our support staff if you are having any problems.

  • Date - 15/05/2012 22:00 - 15/05/2012 23:59
  • Last Updated - 15/05/2012 23:16
Backup server upgrade (Resolved)
  • Priority - Medium
  • Affecting Server - KSP810-Back2
  • We will be upgrading the backup server software on KSP810-Back2 to the latest version between 21:00 and 22:00 on the 15/05/2012. During this time all backup jobs will be paused and file restores won't be available until after the software upgrade has been completed.

    This is essential maintenance to allow us to take advantage of new features available in the latest version of the backup software as well as ensuring the ongoing stability and security of this service. Please accept our apologies for any inconvenience that this will cause and feel free to contact our support staff via the usual means if you have any questions or concerns.

    Update: The upgrade has been completed successfully and all backups have been re-enabled.

  • Date - 15/05/2012 21:00 - 15/05/2012 22:00
  • Last Updated - 15/05/2012 23:14
Maidenhead network maintenance (Resolved)
  • Priority - High
  • Affecting System - Maidenhead network
  • One of our upstream network providers has advised us that they will be performing maintenance work on their Maidenhead network between 18:30 on 05/05/2012 and 00:00 on 06/05/2012.

    During this time, customers may experience brief interruptions to connectivity in the form of packet loss and/or increased latency whilst traffic is re-routed over alternative paths. The Maidenhead network should be considered "at risk" during this period due to the reduced level of redundancy available.

    If you have any questions about this maintenance work, please contact our support staff in the usual manner.

    Update: Our upstream provider has confirmed that they have completed their maintenance work.

  • Date - 05/05/2012 18:30 - 06/05/2012 00:00
  • Last Updated - 09/05/2012 23:12
Manchester network disruption (Resolved)
  • Priority - Critical
  • Affecting System - Manchester network
  • At 21:16 our internal and external monitoring systems alerted us to a loss of service on our Manchester network. This was quickly traced to an issue with one of our upstream network providers that supply the backhaul connectivity between the Manchester and Maidenhead networks.

    Traffic was re-routed over an alternative path and normal service resumed at 21:24. Whilst we were carrying out further investigations, a second disruption occurred at 21:45 due to the upstream provider's network flapping. Our staff quickly made changes to the Manchester network in order to completely disconnect this upstream supplier and normal service was once again resumed at 21:48.

    We have completed a review of the configuration on the Manchester network in order to ensure that all connectivity via the upstream provider in question is disabled and will remain so until we are confident that their network has been stabilised and we have been provided with a full explanation as to the root cause of tonight's issues.

    Please accept our apologies for the inconvenience caused. If you have any questions or are still experiencing problems then please do not hesitate to get in touch with our support staff in the normal manner.

  • Date - 09/05/2012 21:16 - 09/05/2012 21:24
  • Last Updated - 09/05/2012 23:08
Network outage (Resolved)
  • Priority - Critical
  • Affecting System - All Freethought services
  • We are currently investigating a network wide outage and will update you once we have any further information.

    Update: Maidenhead network services have been restored. We are still investigating the Manchester network.

    Update: Manchester network connectivity has also been restored, however the Manchester network is currently "at-risk" due to an upstream network failure.

    Update: Full redundancy has been restored to the Manchester network.

  • Date - 06/05/2012 19:50 - 06/05/2012 20:13
  • Last Updated - 06/05/2012 21:49
At risk: AS41000 network (Resolved)
  • Priority - High
  • Affecting System - AS41000 network
  • We have been informed by one of our upstream network providers that they will be performing maintenance on their core network between 19:00 on 24/04/2012 and 01:00 on 25/04/2012.

    During this period we will disable all of our transit connectivity traversing their network in order to minimise any disruption, so we will be running with a reduced level of redundancy.

    We do not expect this work to be service affecting due to the resilient nature of our network, however customers may see a brief disruption to connectivity whilst traffic is re-routed.

    We will issue an all clear once the maintenance work has been completed. In the meantime, please feel free to contact our support staff via the usual means if you have any questions or concerns.

    Update: We have re-enabled all of our upstream transit connectivity and the network is now back to normal levels of redundancy.

  • Date - 24/04/2012 19:00 - 25/04/2012 01:00
  • Last Updated - 25/04/2012 00:38
Network interruption (Resolved)
  • Priority - Critical
  • Affecting System - AS41000 network
  • Some customers will have experienced a network interruption between 15:10 and 15:15 as we lost connectivity to one of our upstream suppliers. This will only have been noticeable to some customers depending on the route taken to connect to our network. We are currently working with the supplier to establish the cause of this issue and ensure that it does not reoccur. Please accept our apologies for any inconvenience caused.

  • Date - 19/04/2012 15:10 - 19/04/2012 15:15
  • Last Updated - 19/04/2012 15:40
Manchester network maintenance (Resolved)
  • Priority - High
  • Affecting System - Manchester network
  • We will be carrying out some maintenance on our Manchester network on Thursday the 12th of April between 22:00 and 23:59. During this time, customers may experience brief interruptions to connectivity in the form of packet loss and/or increased latency whilst traffic is re-routed.

    If you have any questions about this maintenance work, please contact our support staff in the usual manner.

    Update: This work has been cancelled and will be re-scheduled at a later date.

  • Date - 12/04/2012 22:00 - 12/04/2012 23:59
  • Last Updated - 12/04/2012 19:58
Manchester network maintenance (Resolved)
  • Priority - High
  • Affecting System - Manchester network
  • We will be carrying out some maintenance on our Manchester network on Thursday the 23rd of February between 22:00 and 23:59 to add additional redundancy. During this time, customers may experience brief interruptions to connectivity in the form of packet loss and/or increased latency whilst traffic is re-routed.

    If you have any questions about this maintenance work, please contact our support staff in the usual manner.

    Update: This maintenance work has been completed with minimal disruption to clients. Please contact our support staff in the usual manner if you are having any problems.

  • Date - 23/02/2012 22:00 - 23/02/2012 23:59
  • Last Updated - 11/04/2012 14:47
Emergency reboot (Resolved)
  • Priority - High
  • Affecting Server - TMA01/Japetus
  • We are currently carrying out an emergency reboot of the TMA01/Japetus server in order to resolve an issue with the off-site backups. All services hosted on this server will be unavailable until the reboot has completed. Please accept our apologies for the inconvenience and if you have any questions don't hesitate to get in touch with our support staff in the normal manner.

    Update: TMA01 is back online and all services have been restored. Please let us know if you are still having any problems accessing services hosted on this server.

  • Date - 30/03/2012 23:00 - 30/03/2012 23:18
  • Last Updated - 30/03/2012 23:18
Server reboot (Resolved)
  • Priority - Medium
  • Affecting Server - LDeX1-Plesk1
  • We will be performing scheduled maintenance including essential software updates on TMA02/Enigma between 13:30 and 14:30 on the 27/12/2010. This will require us to reboot the server at least once and so there will be a loss of service whilst this is done. We are unsure exactly how long the maintenance will take, but expect intermittent service interruptions to last up to an hour.

    This is essential maintenance to ensure the ongoing stability and security of this server. Please accept our apologies for any inconvenience that this will cause and feel free to contact our support staff via the usual means if you have any questions or concerns.

    Update: TMA02/Enigma has been rebooted successfully and normal services have been fully restored.

  • Date - 27/12/2010 13:30 - 27/12/2010 14:15
  • Last Updated - 14/03/2012 16:13
Server reboot (Resolved)
  • Priority - Medium
  • Affecting Server - LDeX1-Plesk1
  • We will be performing scheduled maintenance including essential software updates on TMA02/Enigma between 22:00 and 23:59 on the 29/02/2012. This will require us to reboot the server so there will be a loss of service whilst this is done. The maintenance work itself should take less than 20 minutes to complete and we expect downtime to be much less than this.

    This is essential maintenance to ensure the ongoing stability and security of this server. Please accept our apologies for any inconvenience that this will cause and feel free to contact our support staff via the usual means if you have any questions or concerns.

    Update: This work has been re-scheduled for 15/03/2012

    Update: This work has been completed successfully. Please contact our support staff in the usual manner if you are still unable to reach your web-site hosted on this server.

  • Date - 29/02/2012 22:00 - 29/02/2012 23:59
  • Last Updated - 13/03/2012 23:49
Server reboot (Resolved)
  • Priority - Medium
  • Affecting Server - TMA01/Japetus
  • We will be performing scheduled maintenance including essential software updates on TMA01/Japetus between 22:00 and 23:59 on the 29/02/2012. This will require us to reboot the server so there will be a loss of service whilst this is done. The maintenance work itself should take less than 20 minutes to complete and we expect downtime to be much less than this.

    This is essential maintenance to ensure the ongoing stability and security of this server. Please accept our apologies for any inconvenience that this will cause and feel free to contact our support staff via the usual means if you have any questions or concerns.

    Update: This work has been re-scheduled for 15/03/2012

    Update: This work has been completed successfully. Please contact our support staff in the usual manner if you are still unable to reach your web-site hosted on this server.

  • Date - 29/02/2012 22:00 - 29/02/2012 23:59
  • Last Updated - 13/03/2012 23:43
Server reboot (Resolved)
  • Priority - Medium
  • Affecting Server - LDeX1-cPanel1
  • We will be performing scheduled maintenance including essential software updates on TMA03/Tsung between 22:00 and 23:59 on the 29/02/2012. This will require us to reboot the server so there will be a loss of service whilst this is done. The maintenance work itself should take less than 20 minutes to complete and we expect downtime to be much less than this.

    This is essential maintenance to ensure the ongoing stability and security of this server. Please accept our apologies for any inconvenience that this will cause and feel free to contact our support staff via the usual means if you have any questions or concerns.

    Update: This work has been re-scheduled for 15/03/2012

    Update: This work has been completed successfully. Please contact our support staff in the usual manner if you are still unable to reach your web-site hosted on this server.

  • Date - 29/02/2012 22:00 - 29/02/2012 23:59
  • Last Updated - 13/03/2012 23:42
At risk: Maidenhead network (Resolved)
  • Priority - Medium
  • Affecting System - All services hosted in BlueSquare Maidenhead
  • BlueSquare/Lumison (now known as Pulsant) have notified us that they will be carrying out maintenance work on the Optical Distribution Frame in their racks at the Telecity Harbour Exchange data centre in London. This work will affect the connectivity between BlueSquare Maidenhead and Telecity Harbour Exchange.

    This is scheduled to be carried out between 22:00 on the 9th of March and 04:00 on the 10th of March as well as 22:00 on the 10th of March and 04:00 on the 11th of March. During this time the ODF will be examined to identify and repair a fault causing a deterioration of the optical signal.

    This work is not expected to be service affecting due to the resilient nature of the backhaul network that serves the BlueSquare Maidenhead campus, however all services hosted in BlueSquare House should be considered "at risk" during this time as there will be no further protection against fibre breaks on the remaining backhaul network.

    We will issue an all clear once the maintenance work has been completed. In the meantime, please feel free to contact our support staff via the usual means if you have any questions or concerns.

    Update: This maintenance work has been completed successfully.

  • Date - 09/03/2012 22:00 - 11/03/2012 04:00
  • Last Updated - 13/03/2012 23:03
At risk: Maidenhead network (Resolved)
  • Priority - Medium
  • Affecting System - Maidenhead network
  • BlueSquare/Lumison have notified us that they will be carrying out maintenance work on the leg of their fibre ring that connects the BlueSquare Maidenhead campus to the Harbour Exchange data centre in London. This work will be carried out between 21:00 on the 3rd of March and 09:00 on the 4th of March, during which time a new section of fibre will be spliced in to the existing path in order to divert the fibre around a major national rail project.

    This work is not expected to be service affecting due to the resilient nature of the backhaul network that serves the BlueSquare Maidenhead campus, however all services hosted in BlueSquare House should be considered "at risk" during this time as there will be no further protection against fibre breaks on the remaining backhaul network. Customers may notice increased latency or packet loss whilst traffic is re-routed across alternative paths.

    We will issue an all clear once the maintenance work has been completed. In the meantime, please feel free to contact our support staff via the usual means if you have any questions or concerns.

    Update: This maintenance work was completed successfully without any interruption to client services.

  • Date - 03/03/2012 21:00 - 04/03/2012 09:00
  • Last Updated - 09/03/2012 12:08
Network instability (Resolved)
  • Priority - Critical
  • Affecting System - AS41000 network
  • Our external monitoring has alerted us to a loss of service via one of our upstream network providers. These alerts have now cleared, however any customers whose traffic was traversing this provider will have experienced disruption between 02:24 and 02:34. Please accept our apologies for any inconvenience caused.

    We have opened a ticket with the provider in question and are continuing to monitor the network closely. Please contact our support staff in the usual manner if you are still experiencing any problems.

  • Date - 26/02/2012 02:24 - 26/02/2012 02:34
  • Last Updated - 26/02/2012 02:57
Manchester network maintenance (Resolved)
  • Priority - High
  • Affecting System - Manchester network
  • We will be carrying out some maintenance on our Manchester network on Thursday the 16th of February between 22:00 and 23:59 to add additional redundancy. During this time, customers may experience brief interruptions to connectivity in the form of packet loss and/or increased latency whilst traffic is re-routed.

    If you have any questions about this maintenance work, please contact our support staff in the usual manner.

    Update: There was no impact to any customer services during this window.

  • Date - 16/02/2012 22:00 - 16/02/2012 23:59
  • Last Updated - 19/02/2012 16:13
Loss of connectivity to TMA03/Tsung (Resolved)
  • Priority - High
  • Affecting Server - LDeX1-cPanel1
  • We are currently investigating a loss of connectivity to our TMA03/Tsung server.

    Update: We are unable to access this server remotely via KVMoIP, so technicians are now at the rack investigating locally.

    Update: This appears to be a hardware issue with the server. We are currently working to identify the faulty component(s) and restore service via our on-site spare hardware.

    Update: Normal service should now be restored for all clients. We are currently working through the server to make sure that everything is working properly on the new hardware, but initial checks seem to indicate that everything is back up. Please accept our apologies for the inconvenience caused by this downtime.

  • Date - 13/02/2012 13:28 - 13/02/2012 16:45
  • Last Updated - 13/02/2012 16:52
Manchester network at-risk (Resolved)
  • Priority - High
  • Affecting System - Manchester network
  • One of our providers has informed us that they will be carrying out emergency maintenance work on a UPS in their Manchester network at 23:00 on 09/02/2012. This work is required to remove the UPS for safety reasons due to the batteries requiring immediate replacement.

    Our Manchester network should be considered at-risk for the 30 minute period whilst this work is taking place and customers may potentially experience brief packet loss or latency whilst our upstream network provider is carrying out this maintenance.

    Update: Our upstream network provider has informed us that this was completed without incident. We were monitoring the network closely throughout this work and did not see any impact to our services.

  • Date - 09/02/2012 23:00 - 09/02/2012 23:30
  • Last Updated - 10/02/2012 08:18
Manchester network disruption (Resolved)
  • Priority - High
  • Affecting System - Manchester network
  • Customers on our Manchester network may have noticed disruption on some routes at 00:01 on 08/02/2012 for around 3 minutes due to a loss of service from one of our upstream providers.

    Customers traversing this affected route would have been unable to access our Manchester network until traffic was automatically re-routed by BGP. Please accept our apologies for any inconvenience caused.

    The Manchester network should currently be considered "at risk" due to this loss of redundancy. If you are still experiencing any problems or have any questions then please don't hesitate to get in touch with our support staff via the usual methods.

    Update: We have seen service restored to the affected upstream provider and are currently awaiting an RFO report. Connectivity to the upstream provider in question remains disabled at this time to minimise any potential service affecting issues from a recurrence.

  • Date - 08/02/2012 00:01 - 08/02/2012 00:03
  • Last Updated - 09/02/2012 16:28
Loss of connectivity (Resolved)
  • Priority - Critical
  • Affecting System - Synergy House
  • We have lost connection to our Synergy House PoP in Manchester. We are currently investigating this with the facility, who have confirmed that they are aware of the problem.

    Unfortunately this caused some issues with traffic destined for our BlueSquare House network in Maidenhead for some customers who arrive on AS41000 via the Manchester PoP. Traffic has now re-routed and full service should have been restored to all Maidenhead customers. Please get in touch with our support staff in the usual manner if you are continuing to experience problems.

    Update: Connectivity to Synergy House has been restored and we are awaiting an explanation from Telecity. This site should be considered "at risk" until we have any further information.

  • Date - 20/01/2012 14:50
  • Last Updated - 20/01/2012 15:25
Server reboot (Resolved)
  • Priority - Medium
  • Affecting Server - TMA01/Japetus
  • We will be performing scheduled maintenance including essential software updates on TMA01/Japetus between 22:00 on the 18/01/2012 and 02:00 on the 19/01/2012. This will require us to reboot the server so there will be a loss of service whilst this is done. The maintenance work itself should take less than 20 minutes to complete and we expect downtime to be much less than this.

    This is essential maintenance to ensure the ongoing stability and security of this server. Please accept our apologies for any inconvenience that this will cause and feel free to contact our support staff via the usual means if you have any questions or concerns.

    Update: This maintenance work is now under way.

    Update: This maintenance work has been completed and normal service has been restored.

  • Date - 18/01/2012 22:00
  • Last Updated - 19/01/2012 01:02
Packet loss (Resolved)
  • Priority - Medium
  • Affecting System - AS41000 network
  • One of our upstream network providers appears to be suffering some intermittent packet loss, so we have temporarily re-routed our connectivity across alternative paths until this issue is resolved.

    Update: Our monitoring has shown that the affected network provider's links have been clear of packet loss for a number of hours, so we have re-introduced their connectivity into the AS41000 network. We are continuing to closely monitor packet loss and latency in order to ensure that the issue does not reoccur.
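
    As a rough illustration of the kind of check involved (not our actual tooling), packet loss towards a given hop can be sampled with the system ping utility. The sketch below is Python on a Linux host; the target is an illustrative TEST-NET-1 placeholder address, not a real hop.

        import re
        import subprocess

        TARGET = "192.0.2.1"  # illustrative placeholder address

        # Send 20 quiet probes and parse the Linux ping summary line,
        # e.g. "20 packets transmitted, 19 received, 5% packet loss".
        result = subprocess.run(["ping", "-c", "20", "-q", TARGET],
                                capture_output=True, text=True)
        match = re.search(r"([\d.]+)% packet loss", result.stdout)
        if match:
            print(f"packet loss to {TARGET}: {match.group(1)}%")
        else:
            print("could not parse ping output")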

  • Date - 10/01/2012 09:09 - 10/01/2012 00:00
  • Last Updated - 10/01/2012 23:07
Network instability (Resolved)
  • Priority - High
  • Affecting System - All Freethought services
  • We are currently investigating network instability issues that may be affecting traffic from some users depending on their route into our network. It appears that one of our upstream providers is currently experiencing problems with their network and so we are re-routing traffic across alternative connections.

    Update: All Maidenhead services were stabilised at approximately 22:28; Manchester services may still be experiencing some packet loss. We are continuing to investigate.

    Update: The packet loss issues between Maidenhead and Manchester were resolved at approximately 23:10 and the network has been stable since.

    We are continuing to monitor our network closely. Please contact our support staff if you are still experiencing issues with any part of our network.

  • Date - 29/12/2011 22:25 - 29/12/2011 23:10
  • Last Updated - 29/12/2011 23:45
At risk: Manchester network (Resolved)
  • Priority - Medium
  • Affecting System - Manchester network
  • One of our connectivity providers in Manchester has informed us that they will be carrying out maintenance work on the metro Ethernet service that links Telecity Synergy House with other Telecity facilities in Manchester. This work is to replace a switch in order to enhance the resiliency of the service and will be taking place between 23:59 on 05/12/2011 and 04:00 on 06/12/2011.

    This work is not expected to be service affecting due to the resilient nature of our network, however during this time, we will be operating with reduced network redundancy and therefore the network should be considered "at risk". Additionally, you may notice some brief periods of packet loss whilst network traffic is re-routed onto other paths. There may also be periods of slightly increased latency due to less optimal backup routes being used.

    We will issue an all clear once the maintenance work has been completed. In the meantime, please feel free to contact our support staff via the usual means if you have any questions or concerns.

    Update: This work was completed without incident.

  • Date - 05/12/2011 23:30 - 06/12/2011 04:00
  • Last Updated - 15/12/2011 11:20
At risk: Maidenhead network (Resolved)
  • Priority - Medium
  • Affecting System - All services hosted in BlueSquare Maidenhead
  • BlueSquare/Lumison have notified us that they will be carrying out maintenance work on the leg of their fibre ring that connects the BlueSquare Maidenhead campus to the Harbour Exchange data centre in London. This work will be carried out between 22:00 on the 10th of December and 09:00 on the 11th of December, during which time a new section of fibre will be spliced in to the existing path in order to divert the fibre around a major national rail project.

    This work is not expected to be service affecting due to the resilient nature of the backhaul network that serves the BlueSquare Maidenhead campus, however all services hosted in BlueSquare House should be considered "at risk" during this time as there will be no further protection against fibre breaks on the remaining backhaul network.

    We will issue an all clear once the maintenance work has been completed. In the meantime, please feel free to contact our support staff via the usual means if you have any questions or concerns.

    Update: BlueSquare/Lumison have issued the all clear that this work has been completed successfully.

  • Date - 10/12/2011 22:00 - 11/12/2011 09:00
  • Last Updated - 15/12/2011 11:19
Network re-configuration (Resolved)
  • Priority - Medium
  • Affecting System - All services hosted in BlueSquare Maidenhead
  • We will be undertaking some reconfiguration work on our Maidenhead routers and core switching between 21:00 on 09/11/2011 and 01:59 on 10/11/2011 in order to increase network capacity. During this time, there may be momentary periods of increased latency or packet loss as traffic is re-routed across other switches and routers.

    Due to the redundant, failover configuration of our network we do not expect there to be any noticeable impact on customer connectivity, however we will be running with a reduced level of redundancy whilst this work takes place. We will be modifying the routers and core switches individually and testing the new configuration before returning production traffic to each device.
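
    Purely as an illustrative sketch of this one-device-at-a-time approach (the Python below is not our actual tooling; the device names and helper bodies are hypothetical stand-ins for the vendor-specific steps):

        # Hypothetical rolling-maintenance loop; names are illustrative.
        DEVICES = ["rt1", "rt2", "core-sw1", "core-sw2"]

        def drain_traffic(device: str) -> None:
            print(f"{device}: shifting production traffic to the redundant path")

        def apply_config(device: str) -> None:
            print(f"{device}: applying the new configuration")

        def checks_pass(device: str) -> bool:
            print(f"{device}: testing the new configuration")
            return True  # placeholder for real verification

        def restore_traffic(device: str) -> None:
            print(f"{device}: returning production traffic")

        # One device at a time: the redundant path carries traffic
        # throughout, at the cost of temporarily reduced redundancy.
        for device in DEVICES:
            drain_traffic(device)
            apply_config(device)
            if checks_pass(device):
                restore_traffic(device)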

    We will issue an all clear once the maintenance work has been completed. In the meantime, please feel free to contact our support staff via the usual means if you have any questions or concerns.

    Update: Whilst routing traffic away from the device scheduled to be affected by this maintenance work, we encountered an issue with VRRP which caused some customers to experience a period of packet loss. This was quickly resolved by forcing an immediate failover of all VRRP interfaces to the backup router, however the control plane on the primary router became unresponsive to some commands and the decision was taken to perform an emergency reboot of the device in order to restore full control plane functionality.

    As much traffic as possible was routed away from the device, however the control plane issues meant that it was not possible to ensure that the device was completely removed from all routes. At this point, interface graphs confirmed that minimal traffic was traversing this device and the control plane issues meant that no further re-routing work was feasible so the device was rebooted.

    As expected, unfortunately some customer routes were affected in the brief period whilst the device rebooted, however the reboot finished quickly and once back up these paths were fully restored. Once the control plane was confirmed as stable and functioning normally, VRRP sessions were moved back over and traffic was gradually re-routed back on to this device without any further impact on service. The network is now back as it was before the maintenance work began and we do not expect any further interruptions to service.

    Because of the issues experienced, we have decided to re-schedule the planned maintenance work for a later date. Please accept our apologies for any inconvenience caused and don't hesitate to get in touch with us if you have any issues.

    Update: We are re-scheduling this essential network maintenance work for 05/12/2011 at 22:00.

    Update: This work has been completed successfully with no impact to customer services.

  • Date - 09/11/2011 21:00 - 10/11/2011 01:59
  • Last Updated - 15/12/2011 11:17
Maidenhead network disruption (Resolved)
  • Priority - High
  • Affecting System - Maidenhead network
  • Traffic flowing through our BSQ1-RT1 router in Maidenhead suffered a loss of service for 10 minutes between 23:34 and 23:44 on 05/12/2011 due to a software issue on this router which was corrected with an emergency reboot.

    Not all customers were affected depending on the route that traffic was taking, and some customers would have seen their service return faster as individual BGP sessions came back up after the emergency reboot. Full service was restored by 23:44.

    Please accept our apologies for any inconvenience that this caused and please don't hesitate to get in touch with our support staff via the usual methods if you have any questions.

    Update: The same issue occurred with BSQ1-RT2 between 00:02 and 00:06 on 06/12/2011. We believe that this is due to a software bug in a particular feature, for which we have now implemented a workaround.

  • Date - 05/12/2011 23:34 - 05/12/2011 23:44
  • Last Updated - 06/12/2011 00:17
Scheduled UPS maintenance in BlueSquare House (Resolved)
  • Priority - Medium
  • Affecting System - All Maidenhead services
  • We have received a notification from BlueSquare/Lumison that they will be carrying out routine maintenance on the N+1 UPS stack at the BlueSquare House in Maidenhead between 21:00 on the 28/11/2011 and 01:00 on the 29/11/2011.

    This is not service affecting, however services housed in BlueSquare House, Maidenhead should now be considered "at risk" whilst this maintenance is carried out as there will be no redundancy on the UPS equipment during this period. We will provide an update if we receive any further information. In the meantime, please feel free to contact our support staff in the usual manner should you have any questions.

    Update: BlueSquare/Lumison have confirmed that this maintenance has been completed without incident.

  • Date - 28/11/2011 21:00 - 29/11/2011 01:00
  • Last Updated - 30/11/2011 16:22
At risk: Maidenhead network (Resolved)
  • Priority - Medium
  • Affecting System - All services hosted in BlueSquare Maidenhead
  • BlueSquare/Lumison have notified us that they will be carrying out emergency maintenance work on the leg of their fibre ring that connects the BlueSquare Maidenhead campus to the Harbour Exchange data centre in London. This work will be carried out between 21:00 on the 29th of October and 09:00 on the 30th of October, during which time a damaged section of fibre will be replaced.

    This work is not expected to be service affecting due to the resilient nature of the backhaul network that serves the BlueSquare Maidenhead campus, however all services hosted in BlueSquare House should be considered "at risk" during this time as there will be no further protection against fibre breaks on the remaining backhaul network.

    We will issue an all clear once the maintenance work has been completed. In the meantime, please feel free to contact our support staff via the usual means if you have any questions or concerns.

    Update: BlueSquare/Lumison have updated us to say that due to technical difficulties this work has been cancelled and will be re-scheduled for a later date. We will update this notice with any further information that we receive about the rescheduled work.

  • Date - 29/10/2011 21:00 - 30/10/2011 09:00
  • Last Updated - 30/11/2011 15:33
Server reboot (Resolved)
  • Priority - Medium
  • Affecting Server - TMA01/Japetus
  • We will be performing scheduled maintenance including essential software updates on TMA01/Japetus between 21:00 and 23:59 on the 10/11/2011. This will require us to reboot the server so there will be a loss of service whilst this is done.

    The main service affecting part of the maintenance work itself should take less than 20 minutes to complete and we expect downtime to be much less than this, however we will also be updating Microsoft SQL Server to 2008 R2 and so customers using Microsoft SQL Server (MS-SQL) databases should expect to be affected for a longer period within the maintenance window.

    This is essential maintenance to ensure the ongoing stability and security of this server. Please accept our apologies for any inconvenience that this will cause and feel free to contact our support staff via the usual means if you have any questions or concerns.

    Update: All maintenance work has now been completed and normal service resumed. The SQL Server 2008 R2 upgrade was performed quickly and with minimal impact to customers using MS-SQL databases. Please get in touch with us if you have any questions or if you are experiencing any problems.

  • Date - 10/11/2011 21:00 - 10/11/2011 23:59
  • Last Updated - 10/11/2011 22:47
BlueSquare House generator/ATS maintenance (Resolved)
  • Priority - Medium
  • Affecting System - All Maidenhead services
  • BlueSquare/Lumison have informed us that they will be carrying out maintenance work to replace and test the ATS interlocks that form part of the control system for the diesel generator that provides long run time backup power to the BlueSquare House data centre. This work is scheduled to take place between 21:00 on the 3rd of November and 03:00 on the 4th of November.

    During this maintenance, the generator will be disconnected and so will be unable to supply backup power should there be a loss of utility/mains power to the site. Once the interlocks have been replaced, a test will be carried out to ensure that the generator starts correctly and supplies power to the UPS via the ATS should there be a loss of utility/mains power.

    This is not service affecting and the site is still protected by the N+1 UPS, however the batteries will not provide long term protection to ride through an extended power cut, as such the site should be considered "at risk" whilst the maintenance is taking place.

    We will provide updates throughout the maintenance as soon as any extra information is passed to us by the BlueSquare NOC team. Please feel free to contact our support staff in the usual manner should you have any questions.

    Update: BlueSquare/Lumison have informed us that the maintenance work was completed successfully.

  • Date - 03/11/2011 21:00 - 04/11/2011 03:00
  • Last Updated - 07/11/2011 09:13
BlueSquare network maintenance (Resolved)
  • Priority - Medium
  • Affecting System - Maidenhead network
  • BlueSquare/Lumison have notified us that they will be carrying out work on the fibre multiplexers for the Harbour Exchange to Telehouse East leg of their ring between 03:00 and 05:00 on the 12th of October.

    This work is not expected to be service affecting, however during this time the Maidenhead network will be at-risk due to reduced redundancy on the BlueSquare backhaul ring.

    We will issue an all clear once the maintenance work has been completed. In the meantime, please feel free to contact our support staff via the usual means if you have any questions or concerns.

    Update: Lumison have advised that this work was completed without incident.

  • Date - 12/10/2011 03:00 - 12/10/2011 05:00
  • Last Updated - 26/10/2011 12:54
Network instability (Resolved)
  • Priority - Critical
  • Affecting System - All services
  • We are currently experiencing some network instability and our technicians are investigating as a matter of urgency.

    Update: The network has been stabilised and normal service restored. Please accept our apologies for any inconvenience caused. If you have any questions, please get in touch with our support staff via the usual means.

  • Date - 22/09/2011 09:13 - 22/09/2011 09:24
  • Last Updated - 22/09/2011 09:28
Firmware update (Resolved)
  • Priority - Medium
  • Affecting System - Firmware update
  • We will be conducting a routine upgrade of the software on our border routers in BlueSquare House, Maidenhead on 15/09/2011 between 22:00 and 23:59. Due to the redundant, failover configuration of these devices we do not expect there to be any noticeable impact on our customer connectivity, however we will be running with a reduced level of redundancy.

    We will be upgrading the software on each device individually and testing it before returning it to the pool of available devices. Some users may see momentary packet loss or increased latency as traffic fails over between the two routers whilst we remove, reboot and re-insert each device.

    Update: The firmware update has been completed without incident.

  • Date - 15/09/2011 22:00 - 15/09/2011 23:59
  • Last Updated - 16/09/2011 09:43
Server reboot (Resolved)
  • Priority - Medium
  • Affecting Server - TMA01/Japetus
  • We will be performing scheduled maintenance including essential software updates on TMA01/Japetus between 22:00 and 23:59 on the 03/08/2011. This will require us to reboot the server so there will be a loss of service whilst this is done. The maintenance work itself should take less than 20 minutes to complete and we expect downtime to be much less than this.

    This is essential maintenance to ensure the ongoing stability and security of this server. Please accept our apologies for any inconvenience that this will cause and feel free to contact our support staff via the usual means if you have any questions or concerns.

    Update: This work has been re-scheduled for 18/08/2011

    Update: This work has been completed successfully.

  • Date - 03/08/2011 22:00 - 03/08/2011 23:59
  • Last Updated - 18/08/2011 23:55
Network re-configuration (Resolved)
  • Priority - Medium
  • Affecting System - All services hosted in BlueSquare Maidenhead
  • We will be undertaking some reconfiguration work on our Maidenhead routers and core switching between 21:00 on 31/07/2011 and 01:59 on 01/08/2011 in order to increase network capacity. During this time, there may be momentary periods of increased latency or packet loss as traffic is re-routed across other switches and routers.

    Due to the redundant, failover configuration of our network we do not expect there to be any noticeable impact on customer connectivity, however we will be running with a reduced level of redundancy whilst this work takes place. We will be modifying the routers and core switches individually and testing the new configuration before returning production traffic to each device.

    We will issue an all clear once the maintenance work has been completed. In the meantime, please feel free to contact our support staff via the usual means if you have any questions or concerns.

    Update: This work was successfully completed without incident.

  • Date - 31/07/2011 21:00 - 01/08/2011 01:59
  • Last Updated - 04/08/2011 17:46
Software update (Resolved)
  • Priority - High
  • Affecting Server - TMA01/Japetus
  • We will be performing a software update on TMA01/Japetus to upgrade the Parallels Plesk control panel to 9.5.4 between 22:30 and 23:30 on 25/07/2011. During this time services hosted on TMA01/Japetus may be intermittently unavailable.

    Please feel free to contact our support staff via the usual means if you have any questions or concerns.

    Update: This has been completed successfully.

  • Date - 25/07/2011 22:30 - 25/07/2011 23:30
  • Last Updated - 25/07/2011 22:53
Server reboot (Resolved)
  • Priority - Medium
  • Affecting Server - TMA01/Japetus
  • We will be performing scheduled maintenance including essential software updates on TMA01/Japetus between 22:00 and 23:59 on the 06/07/2011. This will require us to reboot the server so there will be a loss of service whilst this is done. The maintenance work itself should take less than 20 minutes to complete and we expect downtime to be much less than this.

    This is essential maintenance to ensure the ongoing stability and security of this server. Please accept our apologies for any inconvenience that this will cause and feel free to contact our support staff via the usual means if you have any questions or concerns.

  • Date - 06/07/2011 22:00 - 06/07/2011 23:59
  • Last Updated - 25/07/2011 14:16
Scheduled UPS maintenance in BlueSquare House (Resolved)
  • Priority - Medium
  • Affecting System - All Maidenhead services
  • We have received a notification from BlueSquare Data that they will be carrying out routine maintenance on the N+1 UPS stack at the BlueSquare House in Maidenhead between 21:00 on the 27/06/2011 and 02:00 on the 28/06/2011.

    This is not service affecting, however services housed in BlueSquare House, Maidenhead should now be considered "at risk" whilst this maintenance is carried out as there will be no redundancy on the UPS equipment during this period. We will provide an update if we receive any further information. In the meantime, please feel free to contact our support staff in the usual manner should you have any questions.

    Update: This work has been completed without incident.

  • Date - 27/06/2011 21:00 - 28/06/2011 02:00
  • Last Updated - 03/07/2011 09:12
At risk: Maidenhead network (Resolved)
  • Priority - Medium
  • Affecting System - All services hosted in BlueSquare Maidenhead
  • BlueSquare Data have informed us that their fibre supplier will be carrying out fibre splicing work on one of the Maidenhead to London backhaul legs on the 24th-25th and 25th-26th of June between 23:00 and 07:00 each day.

    This essential work is being undertaken due to railway construction taking place that will affect the existing fibre route. It is not expected that this will be service affecting due to the resilient nature of the backhaul network that serves BlueSquare.

    All services hosted in BlueSquare House should be considered "at risk" during this time as there will be no further protection against fibre breaks on the remaining backhaul network leg.

    Please feel free to contact our support staff via the usual means if you have any questions or concerns.

    Update: This work was successfully completed without incident.

  • Date - 24/06/2011 23:00 - 26/06/2011 07:00
  • Last Updated - 27/06/2011 08:23
Internal e-mail server migration (Resolved)
  • Priority - High
  • Affecting System - Freethought Internet internal e-mail
  • We will be moving the internal Freethought Internet e-mail system to a new server on 23/06/2011 between 21:00 and 23:59, which will mean that we will have no access to our Freethought e-mail accounts during this time, including the support@freethought-internet.co.uk e-mail address.

    This will not affect client services in any way, however it will mean that we are unable to receive support tickets etc. whilst the upgrade takes place. If you need support during this time, then please raise a ticket directly through the Freethought customer billing and support portal at https://portal.freethought-internet.co.uk

    Update: This work has now been completed and inbound e-mail is once again flowing normally. In some cases, messages may be delayed due to caching of MX records.
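
    As background to the note above about MX caching: resolvers may cache an MX record for its full TTL, so mail submitted shortly after a migration can still be routed to the old server until the cached record expires. Below is a minimal sketch of checking a domain's published MX records and their TTL; it assumes the third-party dnspython package, and the domain shown is purely illustrative:

        # Query the MX records for a domain and print their TTL, which bounds
        # how long a resolver may keep serving a cached (possibly stale) answer.
        import dns.resolver  # pip install dnspython

        answer = dns.resolver.resolve("example.com", "MX")
        print("TTL (seconds):", answer.rrset.ttl)
        for record in answer:
            print(record.preference, record.exchange)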

  • Date - 23/06/2011 21:00 - 23/06/2011 23:59
  • Last Updated - 24/06/2011 00:43
Internal e-mail server migration (Resolved)
  • Priority - High
  • Affecting System - Freethought Internet internal e-mail
  • We will be moving the internal Freethought Internet e-mail system to a new server on 31/05/2011 between 23:00 and 23:59, which will mean that we will have no access to our Freethought e-mail accounts during this time, including the support@freethought-internet.co.uk e-mail address.

    This will not affect client services in any way, however it will mean that we are unable to receive support tickets etc. whilst the upgrade takes place. If you need support during this time, then please raise a ticket directly through the Freethought customer billing and support portal at https://portal.freethought-internet.co.uk

    Edit: Unfortunately we had to abort this work due to problems encountered and will be re-scheduling it for a later date.

  • Date - 31/05/2011 23:00 - 31/05/2011 23:59
  • Last Updated - 10/06/2011 01:19
Router reboot (Resolved)
  • Priority - Medium
  • Affecting System - BSQ1-RT2
  • We will be performing a reboot of one of our Maidenhead routers between 22:00 and 23:59 on 04/06/2011 as part of our work to enable IPv6 connectivity across our network. During this time, there may be momentary periods of increased latency or packet loss as traffic is re-routed across other routers.

    Due to the redundant, failover configuration of our network we do not expect there to be any noticeable impact on customer connectivity, however we will be running with a reduced level of redundancy whilst this work takes place.

    We will issue an all clear once the maintenance work has been completed; in the meantime, please feel free to contact our support staff via the usual means if you have any questions or concerns.

    Update: This work has been re-scheduled to take place on Thursday the 9th of June.

    Update: This work has been completed successfully.

  • Date - 04/06/2011 23:00 - 04/06/2011 23:59
  • Last Updated - 10/06/2011 01:19
Server reboot (Resolved)
  • Priority - Medium
  • Affecting Server - TMA01/Japetus
  • We will be performing scheduled maintenance including essential software updates on TMA01/Japetus between 22:00 and 23:59 on 04/06/2011. This will require us to reboot the server so there will be a loss of service whilst this is done. The maintenance work itself should take less than 20 minutes to complete and we expect downtime to be much less than this.

    This is essential maintenance to ensure the ongoing stability and security of this server. Please accept our apologies for any inconvenience that this will cause and feel free to contact our support staff via the usual means if you have any questions or concerns.

    Update: This work has been re-scheduled to take place on Thursday the 9th of June.

    Update: This work has been completed successfully.

  • Date - 04/06/2011 22:00 - 04/06/2011 23:59
  • Last Updated - 09/06/2011 23:41
Generator servicing and maintenance (Resolved)
  • Priority - Medium
  • Affecting System - All Maidenhead services
  • The diesel generator that provides long run time backup power to the BlueSquare House data centre is scheduled to be serviced on the 7th of June.

    During this maintenance, the generator will not automatically start should there be a loss of mains power to the site, however it may be possible to manually start the generator depending on the maintenance being performed at the time.

    This is not service affecting and the site is still protected by the N+1 UPS; however, the batteries will not provide long-term protection to ride through an extended power cut, so the site should be considered "at risk" whilst the maintenance is taking place.

    We will provide updates throughout the maintenance as soon as any extra information is passed to us by the BlueSquare NOC team. Please feel free to contact our support staff in the usual manner should you have any questions.

    Update: We have just been informed by BlueSquare Data that this work has now been re-scheduled for the 8th of June.

    Update: This work has been completed without incident.

  • Date - 07/06/2011 00:00 - 07/06/2011 23:59
  • Last Updated - 08/06/2011 12:52
Packet loss at Kent Science Park F25 (Resolved)
  • Priority - High
  • Affecting System - All off-site/backup services hosted at KSP F25
  • We are currently experiencing packet loss at the Kent Science Park F25 facility where Freethought Internet house our off-site backup and disaster recovery servers.

    Upstream network engineers are investigating the issue as a matter of priority and we will provide an update as soon as we have more information.

    The Kent Science Park F25 facility is where Freethought Internet house our off-site backup and disaster recovery servers as well as a minority of client servers. All services in our primary Maidenhead PoP are unaffected. Customers hosted in Kent Science Park F25 will be contacted individually. Please accept our apologies for any inconvenience caused.

    Update: Our upstream network provider has identified an issue with a router and is working with the vendor to resolve this as quickly as possible.

    Update: A device generating a massive amount of malicious traffic has been identified and isolated from our upstream provider's network, restoring normal service at approximately 14:35.

  • Date - 23/05/2011 14:11 - 23/05/2011 14:35
  • Last Updated - 23/05/2011 15:28
Internal e-mail server upgrade (Resolved)
  • Priority - High
  • Affecting System - Freethought Internet internal e-mail
  • We will be upgrading the internal Freethought Internet e-mail server on 25/04/2011 between 21:00 and 23:00, which will mean that we will have no access to our Freethought e-mail accounts during this time, including the support@freethought-internet.co.uk e-mail address.

    This will not affect client services in any way, however it will mean that we are unable to receive support tickets etc. whilst the upgrade takes place. If you need support during this time, then please raise a ticket directly through the Freethought customer billing and support portal at https://portal.freethought-internet.co.uk

    Edit: This work has been delayed and will now take place between 00:00 and 02:00 on 26/04/2011

    Edit: This work has now been successfully completed and full e-mail service restored.

  • Date - 26/04/2011 00:00 - 26/04/2011 02:00
  • Last Updated - 26/04/2011 01:23
Network re-configuration (Resolved)
  • Priority - Medium
  • Affecting System - All Maidenhead services
  • We will be undertaking some reconfiguration work on our Maidenhead routers and core switching from 22:00 on 16/04/2011 until 01:59 on 17/04/2011 in order to increase network capacity. During this time, there may be momentary periods of increased latency or packet loss as traffic is re-routed across other switches and routers.

    Due to the redundant, failover configuration of our network we do not expect there to be any noticeable impact on customer connectivity, however we will be running with a reduced level of redundancy whilst this work takes place. We will be modifying the routers and core switches individually and testing the new configuration before returning production traffic to each device.

    We will issue an all clear once the maintenance work has been completed; in the meantime, please feel free to contact our support staff via the usual means if you have any questions or concerns.

    Update: This work is now underway.

    Update: This work has now been completed and full redundancy restored. Our monitoring showed no impact on customer traffic throughout this maintenance period.

  • Date - 16/04/2011 22:00 - 17/04/2011 01:59
  • Last Updated - 17/04/2011 01:10
Server reboot (Resolved)
  • Priority - Medium
  • Affecting Server - TMA01/Japetus
  • We will be performing scheduled maintenance including essential software updates on TMA01/Japetus between 22:00 and 23:59 on 13/04/2011. This will require us to reboot the server so there will be a loss of service whilst this is done. The maintenance work itself should take less than 20 minutes to complete and we expect downtime to be much less than this.

    This is essential maintenance to ensure the ongoing stability and security of this server. Please accept our apologies for any inconvenience that this will cause and feel free to contact our support staff via the usual means if you have any questions or concerns.

    Update: This work has been started and we will reboot the server as soon as the updates have been applied.

    Update: The server is now being rebooted.

    Update: The server is back online and all services have been resumed. If you are having any problems with services hosted on this server, then please get in touch with our support staff in the usual manner.

  • Date - 13/04/2011 22:00 - 13/04/2011 23:59
  • Last Updated - 13/04/2011 23:59
Scheduled UPS maintenance in BlueSquare House (Resolved)
  • Priority - Medium
  • Affecting System - All Maidenhead services
  • We have received the following notification from BlueSquare with regards to equipment located in BlueSquare House, Maidenhead:

    Over the 11th & 12th of April we will be updating the capacitors in the BlueSquare:1 UPS systems; this work will be carried out from 10:00 on both days.
    During this time BlueSquare:1 will have a reduced redundancy of N.

    The anticipated time of works will be 6 – 8 hours per day with full redundancy being resumed at the end of each day.

    This is not service affecting, however services housed in BlueSquare House, Maidenhead should now be considered "at risk" whilst this maintenance is carried out as there will be no redundancy on the UPS equipment during this period. We will provide an update if we receive any further information. In the meantime, please feel free to contact our support staff in the usual manner should you have any questions.

    Update: Today's UPS work has been completed without incident and full redundancy has been restored. Maintenance work will re-commence tomorrow morning as scheduled.

    Update: The UPS engineers have arrived back on site and maintenance work will re-commence shortly, during which time the BlueSquare House UPS will once again be running at reduced redundancy.

    Update: The maintenance work has been completed without incident.

  • Date - 11/04/2011 10:00 - 12/04/2011 20:00
  • Last Updated - 13/04/2011 23:53
Network re-configuration (Resolved)
  • Priority - Medium
  • Affecting System - All Maidenhead services
  • We will be undertaking some reconfiguration work on our Maidenhead routers and core switching from 22:00 on 12/03/2011 until 01:59 on 13/03/2011 in order to increase network capacity. During this time, there may be momentary periods of increased latency or packet loss as traffic is re-routed across other switches and routers.

    Due to the redundant, failover configuration of our network we do not expect there to be any noticeable impact on customer connectivity, however we will be running with a reduced level of redundancy whilst this work takes place. We will be modifying the routers and core switches individually and testing the new configuration before returning production traffic to each device.

    We will issue an all clear once the maintenance work has been completed; in the meantime, please feel free to contact our support staff via the usual means if you have any questions or concerns.

    Update: This work has been re-scheduled to take place on 26/03/2011 at 22:00

    Update: This work has been completed. Further network capacity upgrade work will be scheduled in the next two weeks.

  • Date - 26/03/2011 22:00 - 27/03/2011 02:59
  • Last Updated - 27/03/2011 02:39
UPS inspection in BlueSquare House (Resolved)
  • Priority - Medium
  • Affecting System - All Maidenhead services
  • Following the UPS incident in BlueSquare 2/3 yesterday, BlueSquare Data have informed us that they will be conducting a precautionary inspection of the BlueSquare House UPS on 19/03/2011 at 11:30. This is expected to take no more than 2 hours to complete and the UPS will be operating at full redundancy during this time.

    This work is not expected to be service affecting, however services housed in BlueSquare House, Maidenhead should now be considered "at risk" whilst this maintenance is carried out as there will be engineers working with the UPS equipment during this period. We will provide an update if we receive any further information. In the meantime, please feel free to contact our support staff in the usual manner should you have any questions.

    Update: The inspection work on the BlueSquare House UPS has been completed without incident and no problems have been found.

  • Date - 19/03/2011 11:30 - 19/03/2011 12:57
  • Last Updated - 19/03/2011 13:00
Packet loss at Kent Science Park F25 (Resolved)
  • Priority - Critical
  • Affecting System - All off-site/backup services hosted at KSP F25
  • We are experiencing heavy packet loss at the Kent Science Park F25 facility where Freethought Internet house our off-site backup and disaster recovery servers.

    Upstream network engineers are investigating the issue with third party circuit and equipment providers and we will provide an update as soon as we have more information.

    The Kent Science Park F25 facility is where Freethought Internet house our off-site backup and disaster recovery servers as well as a minority of client servers. All services in our primary Maidenhead PoP are unaffected. Customers hosted in Kent Science Park F25 will be contacted individually. Please accept our apologies for any inconvenience caused.

    Update: Our upstream network provider in Kent Science Park has resolved the packet loss issues and normal service has been resumed. We are continuing to monitor the network in case of any recurring issues. Please don't hesitate to get in touch with our support staff via the usual methods if you have any questions or concerns.

    Update: We are currently seeing a recurrence of the earlier packet loss issues. This is being investigated with our upstream network provider.

    Update: The network in KSP has once again stabilised and we are continuing to monitor the situation.

    Update: The network has remained stable overnight and we have been promised a full RFO first thing next week.
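
    For readers wondering what "continuing to monitor" looks like in practice, here is a minimal sketch of a packet loss probe. This is not our actual monitoring system; it assumes Python 3 and the Linux ping utility, and the target address is a documentation-range placeholder:

        # Ping a host and report the percentage packet loss that ping prints.
        import re
        import subprocess

        def packet_loss(host, count=20):
            result = subprocess.run(
                ["ping", "-c", str(count), host],
                capture_output=True, text=True,
            )
            match = re.search(r"([\d.]+)% packet loss", result.stdout)
            return float(match.group(1)) if match else 100.0

        print("loss:", packet_loss("192.0.2.1"), "%")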

  • Date - 11/03/2011 19:56 - 11/03/2011 20:15
  • Last Updated - 12/03/2011 08:33
Scheduled ATS testing in BlueSquare House (Resolved)
  • Priority - Medium
  • Affecting System - All Maidenhead services
  • We have been notified by BlueSquare that they will be carrying out testing on the ATS (Automatic Transfer Switch) in BlueSquare House between 19:00 and 23:00 on 10/03/2011. This is not service affecting, however services housed in BlueSquare House, Maidenhead should now be considered "at risk" whilst this maintenance is carried out.

    We will issue an all clear notification once we have confirmed that the work has been completed. In the meantime, please feel free to contact our support staff in the usual manner should you have any questions.

    Update: The ATS maintenance work has now been completed without incident. Please consider this an all-clear notification.

  • Date - 10/03/2011 19:00 - 10/03/2011 23:00
  • Last Updated - 10/03/2011 23:09
Firmware update (Resolved)
  • Priority - Medium
  • Affecting System - Maidenhead routers
  • We will be conducting a routine upgrade of the software on our border routers in BlueSquare House, Maidenhead on 08/03/2011 between 23:00 and 23:59. Due to the redundant, failover configuration of these devices we do not expect there to be any noticeable impact on our customer connectivity, however we will be running with a reduced level of redundancy.

    We will be upgrading the software on each device individually and testing it before returning it to the pool of available devices. Some users may see momentary packet loss or increased latency as traffic fails over between the two routers whilst we remove, reboot and re-insert each device.

    Update: The firmware upgrade was completed without incident.

  • Date - 08/03/2011 23:00 - 08/03/2011 23:59
  • Last Updated - 09/03/2011 00:45
Network Maintenance in Kent Science Park F25 (Resolved)
  • Priority - Medium
  • Affecting System - All DR/backup services in KSP F25
  • Our upstream network provider in Kent Science Park F25 has informed us that they are intending to undertake some network maintenance on the 16th of February 2011 from 23:00 to 23:59 that will require a brief loss of connectivity whilst subnets are re-routed to newly installed network devices. This will result in a service interruption of roughly 2-3 minutes per subnet.

    The Kent Science Park F25 facility is where Freethought Internet house our off-site backup and disaster recovery servers as well as a minority of client servers. All services in our primary Maidenhead PoP are unaffected. Customers hosted in Kent Science Park F25 will be contacted individually. Please accept our apologies for any inconvenience caused.

    Update: This work has been completed without incident.

  • Date - 16/02/2011 23:00 - 16/02/2011 23:59
  • Last Updated - 07/03/2011 14:07
Edge switch reload (Resolved)
  • Priority - Medium
  • Affecting System - Maidenhead edge switch 2
  • We will be carrying out a scheduled reload of Maidenhead edge switch 2 on 06/02/2011 at 01:00. This will cause an outage of approximately 2-3 minutes for customers directly connected to this switch.

    Data centre staff will be on-site to complete this reboot as well as provide any extra assistance needed should we encounter any problems and we have a spare switch ready on-site should it be required.

    We are not anticipating any problems with this reboot and we would like to re-assure you that this is a routine piece of maintenance work. The forwarding and management operations of the switch are currently functioning as normal.

    All affected customers have been contacted individually. If you believe that you should be affected by this and have not received any notification messages from our support desk, then please feel free to contact our support staff via the usual means.

    Update: In light of the power issues experienced in BlueSquare House last night, this work has been rescheduled for the same time next weekend. It will now take place on the 16th of February.

    Update: This work has been cancelled. There was no impact on client services.

  • Date - 13/02/2011 01:00 - 13/02/2011 01:30
  • Last Updated - 15/02/2011 11:48
Network outage (Resolved)
  • Priority - Critical
  • Affecting System - All Maidenhead services
  • We are currently experiencing a complete loss of connectivity to all systems in BlueSquare House via all three of our upstream network providers. This appears to also be affecting other, independent providers in BlueSquare House. We are investigating this as a matter of urgency and will post updates here as soon as we have them.

    Update: We have received confirmation of a complete loss of power in BlueSquare House. BlueSquare staff are on-site and investigating.

    Update: We seem to have power back to our equipment and some servers are already back up. We are now going through servers one by one to restore connectivity.

    Update: Power was restored at approximately 23:45 and the majority of equipment has returned to normal. We are working through the remaining servers one by one to restore them to normal operation. If your services are still unavailable, please raise a ticket and our support staff will investigate as soon as possible. Please accept our apologies for the inconvenience caused by this outage; we will be providing a full RFO as soon as we receive details from the BlueSquare House facility management.

    Update: We have received an RFO from the facility management:

    At 22:54 workmen installing a new water main in the Maidenhead area cut through one of the two main HV feeds supplying BlueSquare House. Utility power dropped to the site, and the generators started as expected. However, an electrical interlock fault meant that the Automatic Transfer Switch (ATS) failed to transfer the load to the generator supply. At 23:02 the UPS batteries discharged and the load was dropped to the data floor. After a manual override by our onsite engineers power was restored via the generators and the critical load was recovered. After further testing of the ATS system and interlocks we switched back to full mains supply (via the secondary HV feed) at 01:26. Further diagnostics were carried out on the electrical interlock without any faults found.

    The facility management will be conducting further scheduled tests of the ATS over the next week to try to identify the root cause of the problem. We will issue a scheduled maintenance notice when we receive details of this work.

  • Date - 03/02/2011 23:09 - 03/02/2011 23:45
  • Last Updated - 04/02/2011 15:20
Reduced network redundancy (Resolved)
  • Priority - High
  • Affecting System - All Maidenhead services
  • Following the power outage affecting equipment in BlueSquare House, we are currently running with reduced network redundancy whilst one of our upstream providers works on problems with their network. Services should be considered "at risk" until this provider brings their network back online and full redundancy is restored.

    This is not service affecting and is being issued as a cautionary advisory. If you have any problems then please do not hesitate to contact our support staff in the usual manner.

    Update: Full network redundancy has been restored.

  • Date - 04/02/2011 02:47
  • Last Updated - 04/02/2011 08:51
File system check (Resolved)
  • Priority - Critical
  • Affecting Server - LDeX1-Plesk1
  • TMA02/Enigma is currently running a filesystem check after the unexpected loss of power this evening. All services on TMA02/Enigma are currently unavailable, however normal service will be resumed as soon as this check has finished. Please accept our apologies for the inconvenience.

    Update: TMA02/Enigma is now back online and normal service has been resumed. Please accept our apologies for the inconvenience caused by this outage and do not hesitate to get in touch with our support staff if you are still experiencing any issues.

  • Date - 04/02/2011 01:37 - 04/02/2011 00:00
  • Last Updated - 04/02/2011 02:05
Internal e-mail server upgrade (Resolved)
  • Priority - High
  • Affecting System - Freethought Internet internal e-mail
  • We will be upgrading the internal Freethought Internet e-mail server on 06/01/2011 between 19:00 and 21:00, which will mean that we will have no access to our Freethought e-mail accounts during this time, including the support@freethought-internet.co.uk e-mail address.

    This will not affect client services in any way, however it will mean that we are unable to receive support tickets etc. whilst the upgrade takes place. If you need support during this time, then please raise a ticket directly through the Freethought customer billing and support portal at https://portal.freethought-internet.co.uk

    Update: The mail server software upgrade has been completed successfully and normal e-mail functionality has been restored.

  • Date - 06/01/2011 19:00 - 06/01/2011 21:00
  • Last Updated - 06/01/2011 20:11
Server reboot (Resolved)
  • Priority - Medium
  • Affecting Server - TMA01/Japetus
  • We will be performing scheduled maintenance including essential software updates on TMA01/Japetus between 12:30 and 13:30 on 27/12/2010. This will require us to reboot the server at least once and so there will be a loss of service whilst this is done. We are unsure exactly how long the maintenance will take, but expect intermittent service interruptions to last up to an hour.

    This is essential maintenance to ensure the ongoing stability and security of this server. Please accept our apologies for any inconvenience that this will cause and feel free to contact our support staff via the usual means if you have any questions or concerns.

    Update: This maintenance period has begun and we are currently waiting for the updates to complete. The server is currently online but some services have been stopped in order to unlock system files that need to be modified by the update process. Once the update completes we will reboot the server and restore all services to their normal running state.

    Update: The update process has finished upgrading the various components and is cleaning up after itself, however this is taking a long time as it has tens of thousands of session files to delete. In the meantime, all services should have been restored. Please bear with us whilst we wait for the updater to finish. Once these files have been deleted and the cleanup process is complete then we will reboot the server to finish the updates.

    Update: TMA01/Japetus has been rebooted successfully and normal services have been fully restored.
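
    As a rough illustration of why the cleanup stage above was slow: removing tens of thousands of stale session files means one filesystem operation per file. The sketch below is a toy equivalent, not the actual Plesk updater logic, and the directory path is hypothetical:

        # Delete session files not modified within the last day, one unlink
        # at a time - on very large directories this is what takes so long.
        import os
        import time

        SESSION_DIR = "/var/lib/plesk/sessions"  # hypothetical path
        cutoff = time.time() - 24 * 3600

        for entry in os.scandir(SESSION_DIR):
            if entry.is_file() and entry.stat().st_mtime < cutoff:
                os.unlink(entry.path)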

  • Date - 27/12/2010 12:30 - 27/12/2010 14:15
  • Last Updated - 27/12/2010 14:22
BGP filter reconfiguration (Resolved)
  • Priority - Medium
  • Affecting System - All Maidenhead services
  • We will be re-configuring the BGP filters on our Maidenhead border routers on 29/11/2010 between 21:00 and 23:59. During this time, there may be brief periods of interruption as traffic re-routes across other transit sessions.

    Due to the redundant, failover configuration of these devices and the multiple upstream BGP sessions we do not expect there to be any noticeable impact on our customer connectivity, however we will be running with a reduced level of redundancy whilst each BGP session is taken down and the filter rules are modified. We will be modifying the filters on each BGP session individually and testing it before returning it to the routing table.

    We will issue an all clear once the maintenance work has been completed; in the meantime, please feel free to contact our support staff via the usual means if you have any questions or concerns.

    Update: We have successfully completed the BGP filter maintenance work and changed all BGP sessions to use these new filters without any impact on any Maidenhead services. These new filters will provide us with even more flexibility and control over our routing as well as preparing for our upgrade to IPv6.
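
    To give a flavour of what such per-session prefix filters look like, here is a toy Python sketch that renders Cisco-style prefix-list rules. The list name is hypothetical and this is not our actual router configuration; the prefix shown is simply one of our published ranges:

        # Render one permit rule per prefix, followed by an explicit
        # catch-all deny, using spaced sequence numbers for later edits.
        def prefix_list(name, prefixes):
            lines = [
                "ip prefix-list %s seq %d permit %s" % (name, (i + 1) * 5, p)
                for i, p in enumerate(prefixes)
            ]
            lines.append(
                "ip prefix-list %s seq %d deny 0.0.0.0/0 le 32"
                % (name, (len(prefixes) + 1) * 5)
            )
            return "\n".join(lines)

        print(prefix_list("TRANSIT-OUT", ["194.110.243.0/24"]))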

  • Date - 29/11/2010 21:00 - 29/11/2010 23:59
  • Last Updated - 29/11/2010 22:15
At risk: Maidenhead network (Resolved)
  • Priority - High
  • Affecting System - All services hosted in BlueSquare Maidenhead
  • We have been advised by one of our layer 2 backhaul suppliers that they will be carrying out some unscheduled re-cabling work in Telehouse that will affect the circuits to one of our transit providers as of 26/11/2010 at 13:00. As such, we have manually shut down all BGP sessions to this transit provider in order to gracefully re-route around this maintenance.

    Due to the redundant, multi-homed nature of our network this should not be service affecting for any customers, however we will be running with reduced redundancy whilst this circuit is down for maintenance.

    We will issue an all clear once the maintenance work has been completed; in the meantime, please feel free to contact our support staff via the usual means if you have any questions or concerns.

    Update: Layer 2 connectivity has been restored and we are awaiting an all-clear notification before bringing the BGP sessions back up.

    Update: We have received an all-clear notification from our layer 2 backhaul provider and so have brought our BGP sessions to the affected transit provider back up. The Maidenhead network is now fully redundant again. There was no impact on any Maidenhead services throughout this maintenance period.

  • Date - 26/11/2010 13:00
  • Last Updated - 27/11/2010 13:15
At risk: Kent Science Park F25 (Resolved)
  • Priority - Medium
  • Affecting System - KSP F25 to Telehouse North network infrastructure
  • Our upstream network provider in Kent Science Park F25 has informed us that they lost connectivity on a fibre link from the Kent Science Park F25 facility to Telehouse North at approximately 15:51. All traffic has been automatically re-routed via the Telehouse East path of the 10Gbps fibre ring whilst an engineer is en route to replace a faulty line card at an optical signal regeneration site, which has been identified as the cause of the loss of signal.

    The customer facing network in Kent Science Park F25 is operating normally with no packet loss, however as the Kent Science Park F25 to Telehouse North leg has been removed from the ring, the network is currently operating with reduced resilience and so should be considered "at risk" until the Telehouse North path is returned to operation and full redundancy is restored to the network.

    The Kent Science Park F25 facility is where Freethought Internet house our off-site backup and disaster recovery servers as well as a minority of client servers. All services in our primary Maidenhead PoP are unaffected. Customers hosted in Kent Science Park F25 will be contacted individually. Please accept our apologies for any inconvenience caused.

    Update: Full redundancy was restored without any interruption to customer services with the replacement of the faulty line card at the optical regeneration site.

  • Date - 11/10/2010 16:20 - 11/10/2010 20:55
  • Last Updated - 16/11/2010 23:40
BlueSquare network maintenance (Resolved)
  • Priority - Medium
  • Affecting System - Maidenhead network
  • BlueSquare have notified us that they are intending to install an additional switch onto their redundant layer 2 backhaul ring network on 11/11/2010 at 22:00.

    This should not be service affecting, however it will require a temporary break in the redundant leg of the backhaul ring, and so the ring will be at-risk whilst this work is taking place.

    We will issue an all clear once the maintenance work has been completed; in the meantime, please feel free to contact our support staff via the usual means if you have any questions or concerns.

    Update: We have received an all-clear notification from BlueSquare with regards to the installation of this switch. The install work has been completed without incident.

  • Date - 11/11/2010 22:00 - 12/11/2010 01:00
  • Last Updated - 11/11/2010 22:59
Electrical maintenance work at Kent Science Park (Resolved)
  • Priority - Medium
  • Affecting System - All off-site/backup services hosted at KSP F25
  • We have been advised that the electricity provider at the Kent Science Park needs to conduct some maintenance work on the 11kV HV mains feed that supplies the park. Power will therefore be switched off to the park intermittently throughout the day whilst this work is carried out so that the electrical engineers can work in a safe environment.

    We are not anticipating any impact on service throughout the maintenance window, as the on-site generators will handle the periods without mains power as designed and the N+1 UPS will provide ride-through power whilst engaging and disengaging the generator sets. However, due to the loss of power redundancy at the site, we are considering the facility to be "at risk" during the period when the generators will be the only available power source.

    The Kent Science Park F25 facility is where Freethought Internet house our off-site backup and disaster recovery servers as well as a minority of client servers. All services in our primary Maidenhead PoP will be completely unaffected by this work. Customers hosted in Kent Science Park F25 should contact our support staff in the usual manner if they have any questions about this maintenance work.

    Update: This maintenance work has been cancelled by the 11kV supplier and will be re-scheduled for a later date.

  • Date - 13/11/2010 08:00 - 13/11/2010 18:00
  • Last Updated - 10/11/2010 15:56
Brief network interruption (Resolved)
  • Priority - High
  • Affecting System - Maidenhead network
  • We have experienced two brief network interruptions on some external traffic destined for the Maidenhead network. Each of these interruptions lasted less than a minute and only affected external connectivity passing over one of the multiple redundant transit feeds from one of our upstream transit providers.

    We are investigating the cause of this issue and monitoring the network closely in case this problem re-occurs. Currently we are running with all transit providers, however if we continue to experience problems then we will temporarily remove connectivity to the affected transit provider until the root cause can be pinpointed.

    Please accept our apologies for any inconvenience caused and feel free to contact our support staff via the usual means if you have any questions or concerns.

    Update: We have experienced no further issues so far, however we are continuing to investigate the underlying cause.

    Update: We are continuing to investigate and monitor the situation, however we have experienced no further problems with connectivity via the affected upstream transit provider so far.

    Update: We have been carefully monitoring external connectivity to AS41000 and so far have seen no recurrence of this issue. We are continuing to work with third party suppliers in order to track down the cause of these two brief network interruptions.

    Update: We have been continuing to monitor the network status and have not seen any further issues. We are continuing to work with the relevant third party suppliers to investigate possible causes of these two brief network interruptions.

  • Date - 04/11/2010 14:26
  • Last Updated - 06/11/2010 20:02
Scheduled UPS maintenance in BlueSquare House (Resolved)
  • Priority - Medium
  • Affecting System - All Maidenhead services
  • We have received the following notification from BlueSquare with regards to equipment located in BlueSquare House, Maidenhead:

    Please be aware that our UPS manufacturers will be carrying out a routine service of the UPS systems in both BlueSquare House and BlueSquare 2 & 3.

    This work will reduce the redundancy level of the UPS system from N+1 to 'N' during the times below. UPS protection will be available throughout the service period as normal.

    The service work will be conducted as follows:
    - BlueSquare House - 09:30 until 16:30 on Thursday 21st October 2010
    - BlueSquare 2&3 - 09:30 until 16:30 on Friday 22nd October 2010

    Whilst the work taking place is non-intrusive and UPS protection will still be available at all times, customers should nevertheless treat this as an at-risk period.

    An all clear update will be provided as soon as works have been completed each day.

    This is not service affecting, however services housed in BlueSquare House, Maidenhead should now be considered "at risk" whilst this maintenance is carried out as there will be no redundancy on the UPS equipment during this period. We will provide an update if we receive any further information. In the meantime, please feel free to contact our support staff in the usual manner should you have any questions.

    Update: This work has been completed without incident.

  • Date - 21/10/2010 09:30 - 21/10/2010 16:00
  • Last Updated - 04/11/2010 16:44
Virgin Media routing issues (Resolved)
  • Priority - High
  • Affecting System - All servers on 194.110.243.0/24
  • Customers of Virgin Media seem to be experiencing problems accessing servers hosted on the 194.110.243.0/24 IP range due to routing issues inside the Virgin Media network. We are currently investigating this with Virgin Media and will post an update as soon as we have any further information.

    Update: Normal routing appears to have returned for Virgin Media customers. We will continue to monitor and seek a detailed Reason For Outage report from Virgin Media.

    Update: We haven't seen any further problems with access to 194.110.243.0/24 from Virgin Media, however Virgin Media have so far been unable to provide us with an explanation of the routing issues that their customers experienced this morning.

  • Date - 21/10/2010 07:18 - 21/10/2010 08:51
  • Last Updated - 21/10/2010 11:00
Server reboot (Resolved)
  • Priority - Medium
  • Affecting Server - TMA01/Japetus
  • We will be performing scheduled maintenance including essential software updates on TMA01/Japetus between 22:00 and 22:30 on 16/10/2010. This will require us to reboot the server so there will be a loss of service whilst this is done. The maintenance work itself should take less than 10 minutes to complete and we expect downtime to be much less than this.

    This is essential maintenance to ensure the ongoing stability and security of this server. Please accept our apologies for any inconvenience that this will cause and feel free to contact our support staff via the usual means if you have any questions or concerns.

    Update: This work has been cancelled as it was completed as part of other maintenance work.

  • Date - 16/10/2010 22:00 - 16/10/2010 22:30
  • Last Updated - 16/10/2010 18:55
Reduced network redundancy (Resolved)
  • Priority - Medium
  • Affecting System - All Maidenhead services
  • We have been notified by an upstream network provider that they will be performing maintenance on one of our IP transit feeds on 15/10/2010 between 23:00 and 23:59.

    During this time, we will have reduced network redundancy, although we will still have use of another transit feed from this provider which connects to another PoP, as well as a second, completely independent vendor.

    We do not anticipate any outages as a result of this work, however there may be a momentary disruption to traffic when the maintenance starts as traffic is re-routed over alternative paths.

    Please feel free to contact our support staff in the usual manner should you have any questions.

    Update: Full network redundancy has been restored without incident.

  • Date - 15/10/2010 23:00 - 15/10/2010 23:59
  • Last Updated - 16/10/2010 01:30
Packet loss at Kent Science Park F25 (Resolved)
  • Priority - High
  • Affecting System - All off-site/backup services hosted at KSP F25
  • We are experiencing some packet loss at the Kent Science Park F25 facility where Freethought Internet house our off-site backup and disaster recovery servers.

    Upstream network engineers are investigating the issue with third party circuit and equipment providers and we will provide an update as soon as we have more information.

    The Kent Science Park F25 facility is where Freethought Internet house our off-site backup and disaster recovery servers as well as a minority of client servers. All services in our primary Maidenhead PoP are unaffected. Customers hosted in Kent Science Park F25 will be contacted individually. Please accept our apologies for any inconvenience caused.

    Update: A faulty distribution switch has been disconnected from our upstream provider's network to resolve the packet loss. Please accept our apologies for any inconvenience caused. If you continue to experience any packet loss to services hosted in the Kent Science Park F25 facility, please contact our support staff via the usual means.

  • Date - 02/10/2010 14:04
  • Last Updated - 02/10/2010 17:10
Scheduled UPS maintenance in BlueSquare House (Resolved)
  • Priority - Medium
  • Affecting System - All Maidenhead services
  • We have received the following notification from BlueSquare with regards to equipment located in BlueSquare House, Maidenhead:

    Please be informed that our UPS manufacturer will be performing a routine service on the UPS systems in the above buildings. During this time we will lose N+1 redundancy, but still be protected by the UPS systems. One UPS system at a time will be taken out of service, serviced, and then put back into full working service. We expect each service to take between 1 and 1.5 hours.

    BlueSquare House will be serviced on Tuesday 28th of September, and BlueSquare 2/3 will be serviced on Wednesday 29th September.

    All service work will commence at 09:30.

    We will issue an all clear for each day once full N+1 redundancy has been restored.

    This is not service affecting, however services housed in BlueSquare House, Maidenhead should now be considered "at risk" whilst this maintenance is carried out as there will be no redundancy on the UPS equipment during this period. We will provide an update if we receive any further information. In the mean time, please feel free to contact our support staff in the usual manner should you have any questions.

    Update: This work has been cancelled and will be rescheduled during October.

  • Date - 28/09/2010 09:30 - 28/09/2010 11:00
  • Last Updated - 02/10/2010 17:10
Fibre optic cabling work in BlueSquare House (Resolved)
  • Priority - Medium
  • Affecting System - All Maidenhead services
  • We have received the following notification from BlueSquare with regards to equipment located in BlueSquare House, Maidenhead:

    During the following at-risk period we will be installing additional fibre optic cables from the Windsor road into BlueSquare House and BlueSquare two in Maidenhead.

    Start: 2010-10-02 07:30 (BST)
    End: 2010-10-02 19:30 (BST)

    These works will be undertaken in only one of our diverse ducts so in the unlikely event of a fibre break occurring we will automatically re-route the traffic over the other path.

    Throughout these works there will be some disruption to the BlueSquare House reception whilst our contractors bring the fibre in. Please bear with us if you are on-site during this period.

    This is not service affecting, however services housed in BlueSquare House, Maidenhead should now be considered "at risk" whilst this maintenance is carried out as there is the potential for loss of redundancy on the BlueSquare fibre ring. We will provide an update if we receive any further information. In the meantime, please feel free to contact our support staff in the usual manner should you have any questions.

    Update: We have received confirmation that this work has been completed without incident.

  • Date - 02/10/2010 07:30 - 02/10/2010 19:30
  • Last Updated - 02/10/2010 11:37
Emergency edge switch reboot (Resolved)
  • Priority - Critical
  • Affecting System - Maidenhead edge switch 1
  • We need to conduct an emergency reboot of edge switch 1 in BlueSquare House, Maidenhead on 23/09/2010 between 01:00 and 02:00 due to a failure of the switch management interfaces. This will cause an outage for directly connected customers for 1-2 minutes whilst the switch reboots.

    The failure of the management interface is not currently affecting the ability of the switch to forward traffic; it is just affecting our ability to manage the switch remotely and collect data via SNMP. If the switch deteriorates further then we will need to bring this maintenance forward. We will post any further status updates here.

    Please feel free to contact our support staff via the usual means if you have any questions or concerns.

    Update: This work has been delayed until 24/09/2010 between 01:00 and 02:00 due to other work taking place on servers connected to this switch.

    Update: The switch has been rebooted and normal service has been restored to all affected customers. Please contact our support staff in the usual manner if you are having any problems.

  • Date - 23/09/2010 01:00 - 23/09/2010 02:00
  • Last Updated - 24/09/2010 01:18
Firmware update (Resolved)
  • Priority - Medium
  • Affecting System - Maidenhead edge switch 1
  • We will be conducting a routine upgrade of the software on edge switch 1 in BlueSquare House, Maidenhead on 18/09/2010 between 09:30 and 10:00 ahead of the installation of a new core switching infrastructure. This will cause an outage for directly connected customers for 1-2 minutes whilst the switch reboots.

    Please feel free to contact our support staff via the usual means if you have any questions or concerns.

    Update: This has been delayed slightly by other work that needs to be carried out on-site and will be undertaken between 15:30 and 16:00.

    Update: This work has now been completed.

  • Date - 18/09/2010 15:30 - 18/09/2010 16:00
  • Last Updated - 20/09/2010 22:16
New switch installation (Resolved)
  • Priority - High
  • Affecting System - Maidenhead core switching network
  • We will be installing new core switches on to our network in BlueSquare House, Maidenhead on 18/09/2010 between 10:00 and 14:00. The switches will be installed and tested thoroughly before being connected to the production network. It is expected that this work will cause brief outages for all customers, each lasting less than a minute whilst each of the edge switches is moved to the new core switches and spanning tree re-converges.

    Please feel free to contact our support staff via the usual means if you have any questions or concerns.

    Update: This has been delayed slightly by other work that needs to be carried out on-site and will be undertaken between 14:00 and 18:00.

    Update: This maintenance was cancelled due to time constraints. We will re-schedule this for a later date.

  • Date - 18/09/2010 14:00 - 18/09/2010 18:00
  • Last Updated - 18/09/2010 22:05
Server reboot (Resolved)
  • Priority - Medium
  • Affecting Server - LDeX1-cPanel1
  • We will be performing scheduled maintenance including essential software updates on TMA03/Tsung between 22:00 and 22:59 on 15/09/2010. This will require us to reboot the server at least once and so there will be a loss of service whilst this is done. We are unsure exactly how long the maintenance will take, but expect intermittent service interruptions to last up to an hour.

    This is essential maintenance to ensure the ongoing stability and security of this server. Please accept our apologies for any inconvenience that this will cause and feel free to contact our support staff via the usual means if you have any questions or concerns.

    Update: Due to investigating issues with an upstream network provider, maintenance work on TMA03 was delayed in starting and so is now due to be completed by 00:30 on 16/09/2010.

    Update: The system and cPanel/WHM updates are complete. We are now updating Apache and PHP. Once this is done, the server will be rebooted into the new Linux kernel.

    Update: TMA03/Tsung has failed to start correctly after rebooting to load the new Linux kernel. Our technicians are currently investigating.

    Update: The problem appears to be due to a corruption of stage 1 of the GRUB bootloader stored in the Master Boot Record (MBR). We are currently working to re-install GRUB.

    Update: The GRUB bootloader has been successfully re-installed and TMA03/Tsung has been started into the new Linux kernel with no further problems. All services have been restored and the total downtime experienced by clients hosted on TMA03/Tsung was just under an hour. Please accept our apologies for the inconvenience caused and don't hesitate to contact our support staff if you have any questions or you are experiencing any problems with services hosted on TMA03/Tsung.
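
    For context on the fix described above: re-installing the GRUB boot code into the MBR is normally done from a rescue environment with the damaged system's root filesystem mounted. The sketch below drives this with Python's subprocess module; the device name and mount point are hypothetical, and the exact flags vary by distribution and GRUB version:

        # Re-install GRUB's boot code to the MBR of /dev/sda, taking the
        # GRUB files from a system mounted at /mnt/sysimage (rescue mode).
        import subprocess

        subprocess.run(
            ["grub-install", "--root-directory=/mnt/sysimage", "/dev/sda"],
            check=True,  # raise CalledProcessError if grub-install fails
        )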

  • Date - 15/09/2010 22:00 - 15/09/2010 22:59
  • Last Updated - 16/09/2010 01:38
Server reboot (Resolved)
  • Priority - Medium
  • Affecting Server - LDeX1-Plesk1
  • We will be performing scheduled maintenance including essential software updates on TMA02/Enigma between 23:00 and 23:59 on 15/09/2010. This will require us to reboot the server at least once and so there will be a loss of service whilst this is done. We are unsure exactly how long the maintenance will take, but expect intermittent service interruptions to last up to an hour.

    This is essential maintenance to ensure the ongoing stability and security of this server. Please accept our apologies for any inconvenience that this will cause and feel free to contact our support staff via the usual means if you have any questions or concerns.

    Update: Due to the delay in starting work on TMA03/Tsung, the maintenance on TMA02/Enigma has been pushed back and will start in the next 15-30 minutes.

    Update: Due to issues experienced during the work on TMA03/Tsung, the maintenance on TMA02/Enigma will be re-scheduled to take place at a later date.

  • Date - 15/09/2010 23:00 - 16/09/2010 00:48
  • Last Updated - 16/09/2010 00:49
Upstream network provider failover to backup (Resolved)
  • Priority - High
  • Affecting System - All Maidenhead services
  • One of our upstream providers seems to have re-routed some network traffic via a backup link. This should not be service affecting, however we are monitoring the situation in case we see a deterioration in service and have to intervene to re-route traffic via alternative providers. For now we are maintaining full upstream provider redundancy throughout the network.

    Update: We have confirmed with our upstream provider that they are investigating a switching issue, but in the meantime all traffic is flowing normally via their backup link. We will continue to monitor the situation and can fail over to our own redundant providers at a moment's notice if we begin to experience any issues that we believe are network related.

    Update: Our upstream provider has isolated the cause of this issue and it has been resolved. All traffic is now flowing via the primary links. Our technicians will continue to monitor to ensure that there are no regressions or recurrences.

  • Date - 15/09/2010 23:23 - 16/09/2010 00:10
  • Last Updated - 16/09/2010 00:33
Server reboot (Resolved)
  • Priority - Medium
  • Affecting Server - TMA01/Japetus
  • We will be performing scheduled maintenance including essential software updates on TMA01/Japetus between 22:00 and 22:59 on 15/09/2010. This will require us to reboot the server at least once and so there will be a loss of service whilst this is done. We are unsure exactly how long the maintenance will take, but expect intermittent service interruptions to last up to an hour.

    This is essential maintenance to ensure the ongoing stability and security of this server. Please accept our apologies for any inconvenience that this will cause and feel free to contact our support staff via the usual means if you have any questions or concerns.

    Update: All work on TMA01/Japetus has been completed and all services have been restored. There were two brief outage periods, each lasting around 2-3 minutes.

  • Date - 15/09/2010 22:00 - 15/09/2010 22:59
  • Last Updated - 15/09/2010 23:19
Firmware update (Resolved)
  • Priority - Medium
  • Affecting System - Maidenhead routers
  • We will be conducting a routine upgrade of the software on our border routers in BlueSquare House, Maidenhead on 09/09/2010 between 23:00 and 23:59. Due to the redundant, failover configuration of these devices we do not expect there to be any noticeable impact on our customer connectivity, however we will be running with a reduced level of redundancy.

    We will be upgrading the software on each device individually and testing it before returning it to the pool of available devices. There will be approximately 10-15 seconds of interruption to connectivity when removing and re-inserting each device.

    Update: This work has been successfully completed with no downtime. Both routers are now running the latest firmware and full redundancy has been restored. Please contact our support staff in the usual manner if you are having any problems.

  • Date - 09/09/2010 23:00 - 09/09/2010 23:59
  • Last Updated - 09/09/2010 22:55
BGP maintenance (Resolved)
  • Priority - Medium
  • Affecting System - All Maidenhead services
  • Following on from the scheduled maintenance taking place on the 10th of August 2010, we will be carrying out further maintenance on the BGP sessions to one of our other upstream network providers on the 11th of August 2010 at 19:00.

    This should not affect our ability to pass traffic to this upstream provider as we will be working on each of their multiple redundant BGP sessions separately. If any problems are encountered during this maintenance, then all traffic will automatically be re-routed via our alternative providers whilst the work is taking place.

    Because of this redundancy there is minimal chance of any impact on Maidenhead services, but please consider any devices hosted on the Maidenhead campus 'at risk' during this time.

    This work is expected to take approximately 5-10 minutes to complete, but we are scheduling a 30 minute maintenance window in order to give us time to troubleshoot if needed.

    Customers may see brief increases in latency or momentary packet loss whilst traffic is re-routed, but this should not be service affecting in any way. Please contact our support staff in the usual manner if you have any questions or concerns.

    Update: This work has now been started.

    Update: This work has now been completed.

  • Date - 11/08/2010 19:00 - 11/08/2010 19:30
  • Last Updated - 11/08/2010 19:09
BGP maintenance (Resolved)
  • Priority - Medium
  • Affecting System - All Maidenhead services
  • We will be carrying out maintenance on the BGP sessions to one of our upstream providers on the 10th of August 2010 at 18:00. This will involve a controlled shutdown of the BGP sessions to this upstream provider so that they can be migrated to other devices. All traffic will automatically be re-routed via our alternative providers whilst this maintenance is taking place, but Maidenhead services should be considered 'at risk' during this time.

    Customers may see brief increases in latency or momentary packet loss whilst traffic is re-routed, but this should not be service affecting in any way. Please contact our support staff in the usual manner if you have any questions or concerns.

    Update: This work has now started.

    Update: This has now been completed.

  • Date - 10/08/2010 18:00 - 10/08/2010 00:00
  • Last Updated - 10/08/2010 18:42
Intermittent issues on TMA03/Tsung (Resolved)
  • Priority - Critical
  • Affecting Server - LDeX1-cPanel1
  • We are currently experiencing problems with some services on TMA03/Tsung. Our technicians are investigating and will update this post as soon as possible.

    Update: The file system containing the operating system on TMA03/Tsung appears to be damaged and is preventing backups from being restored directly onto the server. Technicians are currently restoring backups on to another server ready to be copied over. Services on TMA03/Tsung will be intermittent until these backups have been restored. We hope that this will not affect the integrity of customer data, all of which is housed on a separate file system.

    Update: All operating system files have now been restored from the 1PM backups and we are currently correcting the permissions on these files.

    Update: FTP, MySQL and POP3/IMAP access has now been restored. All services are running normally, however we are still conducting maintenance work on the server so may need to restart any of the processes providing these services, which will result in a brief outage.

  • Date - 17/07/2010 14:00 - 17/07/2010 18:45
  • Last Updated - 17/07/2010 18:46
At risk: BlueSquare fibre ring (Resolved)
  • Priority - Low
  • Affecting System - Telehouse East to Telecity Harbour Exchange 8/9
  • BlueSquare have advised us that they have seen a large change in the attenuation on the Telehouse East (London) to Telecity Harbour Exchange 8/9 (London) leg of their fibre ring. This is not currently service affecting and, due to the ring nature of the BlueSquare topology, traffic between these sites can be re-routed on an alternative path around the fibre ring.

    Engineers have been dispatched by BlueSquare to check the fibre circuit and perform tests. We will provide updates here as soon as we have any further information.

    Update: BlueSquare have completed their testing and replaced defective patch cables. Attenuation levels are now normal and BlueSquare continue to monitor the fibre ring. This work was not service affecting.

  • Date - 08/07/2010 11:25 - 08/07/2010 16:50
  • Last Updated - 08/07/2010 17:08
At risk: Kent Science Park F25 (Resolved)
  • Priority - Medium
  • Affecting System - KSP F25 to Telehouse North network infrastructure
  • Our upstream network provider in Kent Science Park F25 has informed us that, having seen packet loss on the fibre link from the Kent Science Park F25 facility to Telehouse North this morning, they have forced all traffic to be re-routed via the Telehouse East path of the fibre ring whilst they investigate the cause of this packet loss.
     
    The customer facing network in Kent Science Park F25 is operating normally with no packet loss, however as the Kent Science Park F25 to Telehouse North leg has been removed from the ring, the network is currently operating with reduced resilience and so should be considered "at risk" until the Telehouse North path is returned to operation and full redundancy is restored to the network.

    The Kent Science Park F25 facility is where Freethought Internet house our off-site backup and disaster recovery servers as well as a small number of client servers. All services in our primary Maidenhead PoP are unaffected. Customers hosted in Kent Science Park F25 will be contacted individually. Please accept our apologies for any inconvenience caused.

    Update: The circuit carrier has confirmed that they are seeing errors on the terminating equipment of the affected circuit located in Telehouse North and so have despatched an engineer to Telehouse in order to check and clean the fibres. If the circuit passes extensive tests once this work is completed then it will be re-introduced into the network to once again complete the ring and restore full redundancy to the Kent Science Park F25 facility network. During these remedial works, the Kent Science Park F25 facility network continues to be considered at-risk, however all services are still operating normally and without packet loss over the Telehouse East leg of the fibre ring.

    Update: An optical transceiver in Telehouse has been replaced for the affected circuit which has resolved the issue. Full redundancy has been restored to the Kent Science Park F25 facility network and as such the at risk notification no longer applies.

  • Date - 29/06/2010 10:55 - 00/00/0000
  • Last Updated - 08/07/2010 11:31
Server reboot (Resolved)
  • Priority - Medium
  • Affecting Server - TMA01/Japetus
  • We will be performing scheduled maintenance on TMA01/Japetus between 22:00 and 23:30 on 26/06/2010. This will require us to reboot the server and so there will be a loss of service whilst this is done. We are unsure exactly how long the maintenance will take, but expect intermittent service interruptions to last up to an hour.

    This is essential maintenance to the backup agent software installed on the server required in order to ensure the ongoing security of the data housed on this server. Please accept our apologies for any inconvenience that this will cause and feel free to contact our support staff via the usual means if you have any questions or concerns.

    Update: This work has been completed and all services were restored in under 2 minutes. If you are still unable to access any services hosted on TMA01/Japetus then please contact our support staff via the usual methods.

  • Date - 26/06/2010 22:00 - 26/06/2010 23:30
  • Last Updated - 27/06/2010 23:13
Server reboot (Resolved)
  • Priority - Medium
  • Affecting Server - LDeX1-Plesk1
  • We will be performing scheduled maintenance on TMA02/Enigma between 22:00 and 23:30 on 26/05/2010. This will require us to reboot the server at least once and so there will be a loss of service whilst this is done. We are unsure exactly how long the maintenance will take, but expect intermittent service interruptions to last up to an hour.

    This is essential maintenance to ensure the ongoing stability and security of this server. Please accept our apologies for any inconvenience that this will cause and feel free to contact our support staff via the usual means if you have any questions or concerns.

    Update: This has been rescheduled for 12/06/2010 between 12:00 and 19:00

    Update: We were unable to start the maintenance work during the scheduled time slot, so this has now been rescheduled for 21/06/2010 between 21:00 and 23:00. There has been no service affecting work carried out so far.

    Update: This maintenance has been cancelled. No services on TMA02/Enigma have been affected.

  • Date - 21/06/2010 21:00 - 21/06/2010 23:00
  • Last Updated - 22/06/2010 01:06
Hardware maintenance (Resolved)
  • Priority - Low
  • Affecting System - Secondary DNS
  • We will be undertaking some hardware maintenance on the server housing our secondary DNS functions on 20/06/2010 from 22:00. This work is likely to take 1-2 hours to complete and requires the server to be powered down throughout, however the primary and tertiary DNS servers will still be answering all authoritative DNS queries during this time so service should not be affected in any way.
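
    If you would like to confirm for yourself that DNS for your domain still resolves during the window, something along the lines of the sketch below will query each authoritative server directly. It uses the third-party dnspython library, and the zone and server addresses shown are placeholders rather than our actual nameserver addresses.

        # Query each authoritative server directly for a zone's SOA record
        # (illustrative sketch; the zone and addresses below are placeholders).
        import dns.message
        import dns.query

        ZONE = "example.com"
        SERVERS = {
            "primary": "192.0.2.53",
            "secondary": "198.51.100.53",  # powered down during the window
            "tertiary": "203.0.113.53",
        }

        for name, address in SERVERS.items():
            query = dns.message.make_query(ZONE, "SOA")
            try:
                reply = dns.query.udp(query, address, timeout=3)
                print(f"{name} ({address}): answered, {len(reply.answer)} record set(s)")
            except Exception as exc:
                print(f"{name} ({address}): no answer ({exc})")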

    Please contact our technical support team in the usual manner if you have any questions or concerns about this maintenance work.

    Update: Secondary DNS services have been restored. If you are having any problems querying our secondary DNS server, then please get in touch with our support staff.

  • Date - 20/06/2010 22:00 - 20/06/2010 23:59
  • Last Updated - 21/06/2010 09:53
Packet loss (Resolved)
  • Priority - Critical
  • Affecting System - All Maidenhead services
  • We experienced some periods of packet loss on our network this evening. This started around 17:40 and the last alert received was around 18:10, however the packet loss was intermittent throughout this period and would have manifested as periods of slowdown when accessing services hosted on the AS41000 network.

    This was an issue present across both of our upstream network providers, which we believe was due to a Distributed Denial of Service (DDoS) attack on a common upstream network provider that they in turn share. We are awaiting an official Reason For Outage (RFO) report and will update this post as soon as we receive it.

    Update: We have received confirmation from both of our upstream providers that this was an issue with a single network provider that they both use for part of their network capacity. Only traffic passing through this common upstream provider would have been affected, and then only with a small amount of packet loss; the issue was quickly resolved by the provider in question. Please contact our support staff in the usual manner if you have any questions or concerns.

  • Date - 15/06/2010 17:40 - 15/06/2010 18:10
  • Last Updated - 15/06/2010 21:53
Firmware update (Resolved)
  • Priority - Medium
  • Affecting System - Maidenhead edge switch 1
  • We will be conducting a routine upgrade of the software on edge switch 1 in BlueSquare House, Maidenhead on 15/05/2010 between 22:00 and 22:30. This will cause an outage for directly connected customers for 1-2 minutes whilst the switch reboots.

    Please feel free to contact our support staff via the usual means if you have any questions or concerns.

    Update: This maintenance work has been delayed until 22/05/2010 between 22:00 and 22:30

    Update: We are currently unable to connect to this device via the console port, which limits our ability to roll back should the update fail so this maintenance is once again being postponed.

    Update: Technicians will be on-site on 12/06/2010 to resolve the console port issue and upgrade the firmware. This is currently scheduled to occur between 12:00 and 19:00 depending on technician availability. As before, the outage will be 1-2 minutes whilst the device reboots and a spare device is on-site and pre-configured to be swapped out should it be required.

    Update: This maintenance has been cancelled in order to avoid disruption to clients and will be re-scheduled at a later date.

  • Date - 12/06/2010 12:00 - 12/06/2010 19:30
  • Last Updated - 13/06/2010 09:58
Router replacement (Resolved)
  • Priority - Medium
  • Affecting System - All Maidenhead services
  • We will be replacing router 1 and router 2 in BlueSquare House, Maidenhead on 12/06/2010 between 12:00 and 19:00. Due to the redundant, failover configuration of these devices we do not expect there to be any noticeable impact on our customer connectivity, however we will be running with a reduced level of redundancy.

    We will be introducing the new routers into the VRRP failover cluster individually and testing each one before removing the old routers from the pool of available devices. There will be approximately 10-15 seconds of interruption to connectivity when removing and re-inserting each device as VRRP fails over between nodes and BGP sessions move over.
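
    The failover mechanism itself can be pictured with the small sketch below: VRRP hands the shared gateway address to the highest-priority router that is still alive, so withdrawing one device simply promotes the other. The priorities shown are invented for illustration and are not our actual configuration.

        # Illustrative model of VRRP master election; priorities are invented.
        routers = {"BSQ1-RT1": 200, "BSQ1-RT2": 150}
        alive = {"BSQ1-RT1": True, "BSQ1-RT2": True}

        def master():
            # The highest-priority live router owns the virtual gateway address.
            candidates = [r for r in routers if alive[r]]
            return max(candidates, key=lambda r: routers[r]) if candidates else None

        print(master())            # BSQ1-RT1 holds the virtual address
        alive["BSQ1-RT1"] = False  # RT1 is withdrawn for replacement
        print(master())            # BSQ1-RT2 takes over (~10-15s in practice)
        alive["BSQ1-RT1"] = True   # the new RT1 is re-inserted
        print(master())            # RT1 pre-empts and becomes master again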

    Please feel free to contact our support staff via the usual means if you have any questions or concerns.

    Update: We have begun swapping over the routers, so you may notice brief periods of packet loss whilst we change over

    Update: The new BSQ1-RT2 device has been installed and we are preparing to switch over BSQ1-RT1.

    Update: The new BSQ1-RT1 device has been successfully installed and the network is now stable once again. Please contact support in the normal manner if you are still experiencing any network related issues.

  • Date - 12/06/2010 12:00 - 12/06/2010 19:00
  • Last Updated - 12/06/2010 17:16
Firewall cluster maintenance (Resolved)
  • Priority - Medium
  • Affecting System - Customers behind managed firewall cluster 1 in Maidenhead
  • We will be performing hardware maintenance on our Maidenhead hardware firewall cluster on 12/06/2010 between 12:00 and 19:00. This will affect shared hosting customers as well as dedicated server and co-location customers that have a managed firewall service hosted on the shared hardware firewall cluster. Dedicated server and co-location customers that do not have a managed firewall service will not be affected by this maintenance.

    Each of the nodes in the cluster will be gracefully removed from the cluster and maintenance performed on it individually before it is returned to the cluster once it has been rebooted, so we do not anticipate any down time as a result of this maintenance.
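
    For the curious, this is the usual rolling-maintenance loop, sketched below. The node names and helper functions are hypothetical placeholders rather than our actual tooling; the point is simply that only one node is ever out of the cluster at a time.

        # Rolling-maintenance pattern: drain, work on, and re-join one node
        # at a time. All names below are illustrative placeholders.
        def drain(node):
            print(f"gracefully removing {node} from the cluster")

        def maintain(node):
            print(f"performing hardware work and rebooting {node}")

        def rejoin(node):
            print(f"returning {node} to the cluster")

        def healthy(node):
            return True  # stand-in for a real post-maintenance health check

        for node in ("fw-node1", "fw-node2"):
            drain(node)
            maintain(node)
            rejoin(node)
            assert healthy(node), f"{node} failed its post-maintenance check"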

    If you have any questions about this maintenance, please raise a support ticket in the usual manner.

    Update: This maintenance has now been completed without any downtime or other impact to customers. Please contact our support staff in the usual manner if you are having any problems with a service hosted behind this cluster.

  • Date - 12/06/2010 12:00 - 12/06/2010 19:00
  • Last Updated - 12/06/2010 14:39
Generator maintenance (Resolved)
  • Priority - Low
  • Affecting System - All Maidenhead services
  • The diesel generator that provides long run time backup power to the BlueSquare House data centre is scheduled to be serviced on the 9th of June.

    During this maintenance, the generator will not automatically start should there be a loss of mains power to the site, however it may be possible to manually start the generator depending on the maintenance being performed at the time.

    This is not service affecting and the site is still protected by the N+1 UPS, however the batteries alone will not provide enough runtime to ride through an extended power cut, so the site should be considered "at risk" whilst the maintenance is taking place.

    We will provide updates throughout the maintenance as soon as any extra information is passed to us by the BlueSquare NOC team. Please feel free to contact our support staff in the usual manner should you have any questions.

    Update: This work has now been completed without any service affecting interruptions

  • Date - 09/06/2010 00:00 - 09/06/2010 00:00
  • Last Updated - 11/06/2010 11:36
Firmware update (Resolved)
  • Priority - Medium
  • Affecting System - All off-site/backup services hosted at KSP F25
  • We have been notified by our upstream network provider that they will be carrying out firmware upgrades on the core mesh switches in the Kent Science Park F25 facility on the 10th of June between 00:01 and 02:01.

    This is essential maintenance recommended by the switch manufacturer in order to rectify a bug identified as the root cause of the switch mesh instability in March.

    During this maintenance window, each of the core switches will be upgraded and rebooted in turn. This will lead to brief periods of packet loss or network instability whilst the mesh re-establishes itself on the remaining switches before reintroducing the upgraded switch. It is expected that there will be approximately 10 minutes of service interruption in total during this maintenance window.

    All services in our primary Maidenhead PoP will be unaffected. Customers hosted in the KSP F25 facility will be contacted individually. Please accept our apologies for any inconvenience caused and feel free to contact our support staff via the usual means if you have any questions or concerns.

    Update: The firmware upgrades were successful and the mesh automatically rebuilt around each switch as it was rebooted. The network has been stable since the upgrades were completed.

  • Date - 10/06/2010 10:00 - 10/06/2010 10:20
  • Last Updated - 10/06/2010 07:20
Network instability in KSP F25 (Resolved)
  • Priority - High
  • Affecting System - All off-site/backup services hosted at KSP F25
  • We are experiencing some brief periods of network instability at the Kent Science Park F25 facility where Freethought Internet house our off-site backup and disaster recovery servers.

    Upstream network engineers are investigating the issue with third party circuit and equipment providers.

    All services in our primary Maidenhead PoP are unaffected. Customers hosted in KSP F25 will be contacted individually. Please accept our apologies for any inconvenience caused.

    Update: We are still experiencing some very brief periods of packet loss, however this is now down to under 1% and the upstream engineers are still investigating.

    Update: We received notification from our upstream network supplier at 18:20 that the issue had been identified and rectified. The root issue was a failing 10Gbps fibre optic transceiver, which was quickly disabled and removed. A replacement module was installed at 18:46 returning the network to full redundancy.

  • Date - 01/06/2010 18:05 - 01/06/2010 18:46
  • Last Updated - 02/06/2010 12:11
Loss of UPS redundancy in BlueSquare House (Resolved)
  • Priority - Medium
  • Affecting System - All Maidenhead services
  • We have received the following notification from BlueSquare with regards to equipment located in BlueSquare House, Maidenhead:

    Please be aware that we have temporarily turned off one UPS module in the UPS system serving BlueSquare House. This means a loss of redundancy in the system, but full UPS protection is still in place.

    This unit has been turned off on the advice of our UPS manufacturer after a cooling fan failed in this unit at approx 8.30am this morning. An engineer will be visiting later today to fit a new fan and the unit will then be re-introduced to the system, restoring redundancy.

    This is not service affecting, however services housed in BlueSquare House, Maidenhead should now be considered "at risk" whilst there is no redundancy on the UPS equipment. We will provide an update as soon as we have any further information. In the meantime, please feel free to contact our support staff in the usual manner should you have any questions.

    Update: We have been given the all clear by BlueSquare. The failed cooling fan has been replaced and the UPS module returned to the running system, thus restoring full redundancy.

  • Date - 24/05/2010 10:51 - 24/05/2010 16:24
  • Last Updated - 24/05/2010 16:26
Firmware update (Resolved)
  • Priority - Medium
  • Affecting System - Maidenhead border routers
  • We will be conducting a routine upgrade of the software on our border routers in BlueSquare House, Maidenhead on 15/05/2010 between 21:00 and 21:30. Due to the redundant, failover configuration of these devices we do not expect there to be any noticeable impact on our customer connectivity, however we will be running with a reduced level of redundancy.

    We will be upgrading the software on each device individually and testing it before returning it to the pool of available devices. There will be approximately 15 seconds of interruption to connectivity when removing and re-inserting each device.

    Please feel free to contact our support staff via the usual means if you have any questions or concerns.

  • Date - 15/05/2010 21:00 - 15/05/2010 21:30
  • Last Updated - 16/05/2010 00:05
Software update (Resolved)
  • Priority - Medium
  • Affecting Server - LDeX1-Plesk1
  • We will be performing a software update on Enigma to upgrade the Parallels Plesk control panel from 9.5.1 to 9.5.2 on the 11th of May. This will start at 22:00 and is expected to take around 20-30 minutes, during which time services hosted on Enigma may be intermittently unavailable.

    Please feel free to contact our support staff via the usual means if you have any questions or concerns.

    Update: All software updates have been completed. Total outage time for web-sites was about 7 minutes.

  • Date - 11/05/2010 22:00 - 11/05/2010 23:00
  • Last Updated - 11/05/2010 23:04
Software update (Resolved)
  • Priority - Medium
  • Affecting Server - LDeX1-Plesk1
  • We will be performing a software update on Enigma to upgrade the Parallels Plesk control panel from 9.3 to 9.5 on the 26th of April. This will start at 22:00 and is expected to take around 20-30 minutes, during which time services hosted on Enigma may be intermittently unavailable.

    Please feel free to contact our support staff via the usual means if you have any questions or concerns.

    Update: This update has been completed successfully

  • Date - 26/04/2010 22:00 - 26/04/2010 23:00
  • Last Updated - 26/04/2010 23:53
Loss of network connectivity (Resolved)
  • Priority - Critical
  • Affecting System - Maidenhead network
  • We have experienced a complete loss of network connectivity in Maidenhead. We are currently investigating this with our upstream transit suppliers.

    Update: Connectivity has now been restored and all services should be available once again. Please contact our support staff in the usual manner if you are still experiencing any problems.

    We would like to reassure clients that we run a redundant network and that it would not normally be possible for a problem with one transit feed, such as in this case, to disrupt our network in such a manner. However, due to the migration between racks undertaken yesterday (the 25th), we were running on only one transit connection whilst waiting for the other connection to be brought online in the morning.

    Troubleshooting and resolving the issue was further complicated by the failure of a KVMoIP device that permits out of band management access to the network. Once this was replaced, work was immediately undertaken to restore connectivity and bring all services back up.

  • Date - 26/04/2010 00:56 - 26/04/2010 02:45
  • Last Updated - 26/04/2010 02:56
Maidenhead rack move (Resolved)
  • Priority - High
  • Affecting System - Maidenhead servers
  • On the 13th of March we will be moving some servers between racks in BlueSquare House, Maidenhead. We will contact clients individually to let you know if you are affected by this. Downtime for individual servers should be short whilst we power down and physically move your server. Please accept our apologies for any inconvenience that this may cause and do not hesitate to contact our support staff in the usual manner should you have any questions.

    Update: This work has been provisionally rescheduled for the 24th of April. We will contact clients individually and update this notice once we have confirmed this maintenance slot.

    Update: Our technicians are on site and have started preparation work. This has been slightly delayed due to problems accessing the facility, so we expect to start powering down servers around 13:00-13:30.

    Update: We are starting to power down and move servers now.

    Update: No servers have been moved yet as we are having some network issues in the destination rack. Once we have resolved these, we will start moving the affected servers over.

    Update: We are still working on resolving the issues in the destination rack. No servers have been moved yet so there should not have been any down time or service impact for any customers. We will update you as soon as we have more information.

    Update: We have been unable to resolve the networking issues in the destination rack as they lie with a third party supplier. As such, we are cancelling the maintenance for today and will attempt to migrate the servers again tomorrow. Further details will follow once we have more information on the plans for tomorrow.

    Update: We have cancelled the work scheduled for today due to on-going issues in the destination rack. This work will be re-scheduled for a later date once all the outstanding issues have been resolved.

    Update: The destination rack issues have been resolved. Our engineers are returning to the data centre to complete the work originally scheduled for this weekend.

    Update: Our engineers are on site and are preparing to commence the work.

    Update: Most of the affected servers have been powered down and are being moved now. Once these are complete we will move the remaining servers. There may be some intermittent network issues during this time.

    Update: We have been delayed in restoring power to the servers due to problems with the new IEC power cables. Alternative cables have been sourced and are being installed now.

    Update: All servers are powered up and network connectivity is being restored one server at a time.

    Update: The majority of services should be restored now; however, we are still working on our VPS service. We should be able to restore this fully shortly.

    Update: All services have been restored and everything is working normally. Mission accomplished.

  • Date - 24/04/2010 10:00 - 25/04/2010 18:00
  • Last Updated - 25/04/2010 20:43
Firmware update (Resolved)
  • Priority - Low
  • Affecting System - Maidenhead hardware firewall cluster
  • We will be performing a firmware update on our Maidenhead hardware firewall cluster on 26/03/2010. This will affect shared hosting customers as well as dedicated server and co-location customers that have a managed firewall service hosted on the shared hardware firewall cluster. Dedicated server and co-location customers that do not have a managed firewall service will not be affected by this maintenance.

    Each of the nodes in the cluster will be gracefully removed from the cluster and updated individually before being returned to the cluster once they have rebooted, so we do not anticipate any down time as a result of this maintenance.

    If you have any questions about this maintenance, please raise a support ticket in the usual manner.

  • Date - 26/03/2010 22:00 - 26/03/2010 22:30
  • Last Updated - 30/03/2010 02:08
Network outage in Maidenhead (Resolved)
  • Priority - Critical
  • Affecting System - Maidenhead connectivity
  • We are currently experiencing a network outage in Maidenhead. We believe this may be as a result of an outage at the London Internet Exchange (LINX). Technicians are investigating and will post an update shortly.

    Update: We have re-routed all traffic via another network provider and access should now be restored. Please accept our apologies for any inconvenience caused.

  • Date - 17/03/2010 16:43 - 00/00/0000
  • Last Updated - 17/03/2010 17:27
Network instability in KSP F25 (Resolved)
  • Priority - Critical
  • Affecting System - All off-site/backup services hosted at KSP F25
  • We have experienced some brief periods of network instability at the Kent Science Park F25 facility where Freethought Internet house our off-site backup and disaster recovery servers as well as the client portal, company web-site and internal e-mail.

    Upstream network engineers are investigating the issue with third party circuit and equipment providers.

    All services in our primary Maidenhead PoP are unaffected. Customers hosted in KSP F25 will be contacted individually. Please accept our apologies for any inconvenience caused.

    Update: Our upstream network provider and their equipment supplier (HP) have diagnosed a potential firmware issue, relating to rebuilding the mesh after a circuit failure, in the equipment used for layer 3 fibre termination, and are investigating mitigating this by rolling back to a previous firmware image on this equipment. In the meantime, the MAC address table has been manually flushed on these devices to force the mesh table to be rebuilt.

    Update: Configuration changes have been made to the switching mesh and it has now been stable for approximately 30 minutes, so the decision has been made not to carry out any firmware downgrades at this time. Upstream network engineers continue to closely monitor the performance and stability of the switching mesh.

  • Date - 16/03/2010 15:26 - 16/03/2010 17:42
  • Last Updated - 16/03/2010 17:46
Router firmware upgrade (Resolved)
  • Priority - Low
  • Affecting System - Maidenhead connectivity
  • We will be conducting a routine upgrade of the software on our border routers in BlueSquare House, Maidenhead on 05/03/2010 between 22:00 and 22:30. Due to the redundant, failover configuration of these devices we do not expect there to be any noticeable impact on our customer connectivity, however we will be running with a reduced level of redundancy.

    We will be upgrading the software on each device individually and testing it before returning it to the pool of available devices. There will be approximately 10-15 seconds of interruption to connectivity when removing and re-inserting each device.

  • Date - 05/03/2010 22:00 - 05/03/2010 22:30
  • Last Updated - 05/03/2010 23:19
Server reboot (Resolved)
  • Priority - Medium
  • Affecting Server - LDeX1-Plesk1
  • As we were unable to complete the scheduled maintenance work to change the Linux kernel used by TMA02/Enigma on 24/12/2009, and have since corrected the cause of the problem encountered in that maintenance window, a further reboot is once again required in order to load the new kernel.

    This will be carried out between 22:00 and 22:30 on 29/01/2010 and should take no longer than 5-10 minutes.

    Update: This has been re-scheduled for 21/02/2010

  • Date - 21/02/2010 22:00 - 21/02/2010 22:30
  • Last Updated - 21/02/2010 23:25
Scheduled electrical maintenance (Resolved)
  • Priority - High
  • Affecting System - All off-site backup/DR services hosted in KSP
  • In light of the power outages suffered at the Kent Science Park F25 facility in the last months of 2009, the facility operators have undertaken a thorough review of the electricity distribution infrastructure. As a result of this review, improvements are to be made in the electrical distribution system feeding suites A and B of the F25 facility on the 21st of February between 00:15 and 04:15.

    In order to provide a safe working environment for the electricians, as well as to prevent any damage to customer equipment, this will require that suites A and B in the F25 facility are powered down for the duration of the work.

    We will contact the affected customers individually, but please note that only off-site backup and disaster recovery services hosted from F25 in Kent Science Park will be affected. Customers hosted in BlueSquare, Maidenhead will be unaffected.

    The Freethought Internet and PowerCore Networks web-sites, e-mail and customer portal all reside on servers in the F25 facility and so will also be affected as part of this maintenance work. Should you require any support or other assistance during this maintenance window, please use the alternative contact methods provided as part of your support details.

    No action is required from clients; we will shut your equipment down before the maintenance starts and power it up again once the maintenance work has completed.

    Please accept our apologies for any inconvenience caused as a result of this work. It is essential that this maintenance is completed in order to guarantee the stability of the power supply in the F25 facility.

  • Date - 21/02/2010 00:00 - 21/02/2010 04:30
  • Last Updated - 21/02/2010 08:28
Maidenhead network outage (Resolved)
  • Priority - Critical
  • Affecting System - Maidenhead network
  • We have just experienced a network outage in BlueSquare House, Maidenhead from 22:09 to 22:20. We are still investigating, but it seems that BlueSquare rebooted upstream switching equipment as part of planned maintenance that we were unaware of. This caused us to simultaneously lose access to all upstream networks.

    All service was restored at 22:20. If you are still experiencing any problems then please contact our support staff in the usual manner. Please accept our apologies for any inconvenience caused.

    Update: We have received the following from BlueSquare:

    During the scheduled upload of new firmware to our Comms1 Bluesquare House ring switch, the switch crashed halfway through the download and tried to boot into the incomplete firmware version. We rolled this back at the console to the old version, however an outage will have been experienced between 22:13 GMT and 22:21. This is the reason we are currently in an at risk period, and we would like to apologise for this inconvenience.

    We will be attempting the firmware update of this switch again at 00:01 GMT tomorrow (Thursday).

  • Date - 27/01/2010 22:09 - 27/01/2010 22:20
  • Last Updated - 27/01/2010 22:46
Re-patching of VPS clients (Resolved)
  • Priority - High
  • We will be conducting scheduled maintenance work on 26/01/2010 from approximately 18:00 onwards affecting the servers housing former No Wires/Crystal Data clients.

    Initially this work will only affect the availability of the web based HyperVM control panel used to administer the VPS, however part of this work involves migrating these servers from the No Wires network and onto the Freethought Internet networks. This will result in all services on these servers being unavailable over the internet for approximately 10-15 minutes.

    No actions are required by affected clients and all IP addresses for clients' VPS will remain the same after the migration is complete. The URL used to access the HyperVM control panel will, however, change. We will contact all clients via e-mail with the new details for HyperVM once the migration is complete.

    We will be posting regular updates here to keep you advised of the progress throughout this work.

    Update: We have moved half of the servers over and have restored connectivity. The other half are being worked on at the moment. HyperVM is still unavailable.

    Update: Service to all clients should now be fully restored. Just completing final checks.

    Update: We have confirmed that service has been restored to all nodes. If you are still experiencing issues then please raise a ticket with our support staff and we will investigate. New HyperVM details will be e-mailed to clients shortly.

  • Date - 26/01/2010 18:00 - 26/01/2010 22:00
  • Last Updated - 26/01/2010 22:10
Migration to new server (Resolved)
  • Priority - High
  • Affecting Server - LDeX1-cPanel1
  • We will be migrating all clients currently hosted on Martini to a new server starting on 02/01/2010.

    This will provide a massive increase in both performance and reliability over the current server.

    During the migration, all services on Martini will be unavailable. We will be posting regular updates here to keep you advised of the progress.

    Update: The migration process has begun. E-mail and web services on Martini have been stopped and we are copying over the first set of accounts now.

    Update: The first batch of accounts have been copied over and service has been restored to these users. The final batch of accounts are copying now.

    Update: Copying is going well; we only have the accounts for one reseller left to go. Some sites may be experiencing problems due to cURL not being available to PHP; we will correct this as soon as the copy process has completed.

    Update: We have now copied over 60% of the accounts for the one remaining reseller. The accounts that have been copied are already online and working.

    Update: All accounts have copied and all services have been restored to all users. If you continue to experience any issues, please contact our support staff in the usual manner.

  • Date - 02/01/2010 08:00 - 03/01/2010 23:00
  • Last Updated - 02/01/2010 15:02
Server reboot (Resolved)
  • Priority - Medium
  • Affecting Server - LDeX1-Plesk1
  • Following the scheduled maintenance work to replace a RAID card in TMA02/Enigma on 22/12/2009, a further reboot is required due to subsequent changes to the kernel to optimise performance.

    This will be carried out between 19:30 and 20:00 on 24/12/2009 and should take no longer than 5-10 minutes.
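
    As an aside for customers managing their own servers, whether a reboot has actually picked up a new kernel can be confirmed with a check along the lines of the sketch below. The expected version string shown is a placeholder, not the specific kernel we are deploying.

        # Post-reboot check that the expected kernel is running
        # (the expected version below is a placeholder for illustration).
        import platform

        EXPECTED = "2.6.18-164.el5"  # placeholder version string
        running = platform.release()
        if running == EXPECTED:
            print(f"OK: running expected kernel {running}")
        else:
            print(f"WARNING: running {running}, expected {EXPECTED}")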

    Update: The reboot was unsuccessful and will have to be rescheduled for a later date. We are now running on the same Linux kernel as before. Our technicians are now working to diagnose the cause of the failure of the new kernel. Unfortunately we experienced a longer outage than predicted due to a KVM failure which prevented us from reverting back to the old kernel version until we were able to fix the unit.

  • Date - 24/12/2009 19:30 - 24/12/2009 20:00
  • Last Updated - 24/12/2009 21:20
Hardware upgrade (Resolved)
  • Priority - Medium
  • Affecting Server - LDeX1-Plesk1
  • We have scheduled an upgrade of the RAID card in TMA02/Enigma for Monday the 21st of December, commencing at 19:30.

    This maintenance work will require us to power down the server and copy all of the data over to the new RAID array. We estimate that this process will take 2-3 hours in total. All services hosted on TMA02/Enigma will be unavailable during this time.

    Once the RAID card has been upgraded we should see greatly improved general disk performance as well as increased reliability in the unlikely event of a hard drive failure.

    Update: This work has been moved to Tuesday the 22nd of December at 19:30.

    Update: TMA02/Enigma has now been shut down and work has begun on installing the new RAID card and migrating all client data. Services hosted on this server are now unavailable.

    Update: The new RAID array is installed and the copying of data is well under way. We are currently estimating that service will be restored around midnight.

    Update: All data has been copied and we are testing the server to ensure that the new RAID card is recognised properly prior to bringing it back into service.

    Update: TMA02/Enigma has been returned to service, however some pages are taking longer than expected to load. We are still working on this and hope to have a further update shortly.

    Update: The performance issues have been tracked down to a recursive DNS resolver which has now been fixed. All services on TMA02/Enigma should now be running as normal. Please contact support if you are continuing to experience any problems.

  • Date - 22/12/2009 19:30 - 23/12/2009 01:00
  • Last Updated - 23/12/2009 02:40
Border router upgrade in Maidenhead (Resolved)
  • Priority - Low
  • Affecting System - All services hosted in Maidenhead BlueSquare House
  • We will be conducting a routine upgrade of the software on our border routers in BlueSquare House, Maidenhead on 12/12/2009 between 21:00 and 21:30. Due to the redundant, failover configuration of these devices we do not expect there to be any noticeable impact on our customer connectivity, however we will be running with a reduced level of redundancy.

    We will be upgrading the software on each device individually and testing it before returning it to the pool of available devices. There will be approximately 15 seconds of interruption to connectivity when removing and re-inserting each device.

  • Date - 12/12/2009 21:00 - 12/12/2009 21:30
  • Last Updated - 12/12/2009 23:44
Scheduled network interruption at KSP F25 (Resolved)
  • Priority - High
  • Affecting System - All off-site/backup services hosted at KSP
  • We have been notified by our network provider at the Kent Science Park F25 facility (where Freethought Internet house our off-site backup and disaster recovery servers as well as the client portal, company web-site and internal e-mail) that they intend to conduct essential network maintenance this evening in order to ensure that network connectivity to Suite B will not be interrupted by a power failure in Suite A. This work will take place between 23:00 and 23:15 on 10/12/2009 and the outage itself should last no more than 10-15 seconds.

    This work was originally planned as part of the 2.5Gbps to 10Gbps fibre network upgrades scheduled to take place in January, however it has been brought forward in light of the recent power outages.

    We would like to apologise for the short notice; however, the impact of this work is minimal and it will result in improved reliability for equipment in Suite B of the KSP F25 facility.

  • Date - 10/12/2009 23:00 - 10/12/2009 23:15
  • Last Updated - 12/12/2009 12:38
Power outage at Kent Science Park (Resolved)
  • Priority - Critical
  • Affecting System - All off-site/backup services hosted at KSP
  • The Kent Science Park facility used to host the Freethought web-site, portal and internal e-mail as well as customers with off-site disaster recovery or backup servers has suffered a site wide power outage due to a faulty UPS.

    All services at this site were unavailable from 12:04 to 14:57, however normal service has now been resumed. If any customers are still experiencing any problems then please contact our support staff in the usual manner.

    We are awaiting a full RFO (Reason For Outage) report from the facility operator and will update this post as soon as we receive this.

    Update (10/12/2009): We have now received the following RFO:

    The F25 facility at the Kent Science Park, where Freethought Internet house our off-site backup and disaster recovery servers as well as the client portal, company web-site and internal e-mail, suffered a complete power failure to suites A and B starting at 12:04. Facility wide power restoration across the two affected suites began at approximately 14:30 with racks being powered up individually, and full service was restored to our equipment at 14:57.

    The initial loss of power was caused by an electrical fault, which also triggered the aspirating smoke detection system and thus the building's fire alarm.
    Engineers were immediately alerted and attended the F25 facility to investigate the fault. It was quickly established that a tap-off unit had failed; multiple breakers had tripped and a distinct electrical smell was present.

    The failed 160A electrical busbar tap-off unit feeds Suite B, whereas the previous tap-off that failed fed Suite A; that tap-off unit was visually checked during the previous failure and no apparent damage was present. Due to the recurrence of this issue, albeit on a different suite's feed, the decision was taken to split the load of the facility between the parallel UPS stacks that are present. Suite A is still fed from the replacement busbar tap-off that was installed previously - this has been visually checked again and no visible damage is present. Suite B was removed from the parallel busbar and connected directly to a UPS stack.

    The changes made are a temporary solution. Suite A is fully protected by a UPS stack, however it is not running to N+1 standard. Suite B is fully protected by a UPS stack and is running to N+1 standard providing customer load does not increase. Suite C is fully protected by a UPS stack and is running to N+1 standard; it has never suffered a power failure, has always run from its own UPS stack and will not be subject to any of these works. It is believed that equipment in Suite A can be re-assigned to restore N+1 availability and engineers will work on this over the coming days.

    Time will now be spent planning and deploying a replacement of the main post-UPS busbar system within F25 suites A and B, as these outages are clearly unacceptable and the only conclusion we can reach is that the busbar tap-off systems supplied by the manufacturer may be substandard, as everything was running within the design limits of the busbar system. This will mean scheduled maintenance to the power systems over the coming months; however, until the design is completed we cannot say when these works will take place.

    Please accept our sincere apologies for this outage and feel free to contact me if you wish to discuss or understand these issues further. I can assure you that everybody involved is 100% committed to re-engineering/replacing the busbar infrastructure within F25 to ensure such outages cannot recur. This will be done as quickly as feasibly possible and irrespective of the cost. The intention is to restore total customer confidence in our KSP operation as rapidly as possible.

    I must thank everyone for bearing with us, and, as ever, we will keep customers fully updated regarding the developments in F25.

  • Date - 09/12/2009 12:04 - 09/12/2009 14:57
  • Last Updated - 10/12/2009 17:15