You might have noticed that Git repository platform GitLab suddenly went offline on Wednesday. The outage was apparently caused by a major database glitch that forced emergency maintenance.
The GitLab team escalated to emergency database maintenance after accidentally losing six hours of data, including issues, merge requests, users, comments, and snippets. Git repositories and wikis, however, are reported to be unaffected.
GitLab acknowledged the issue in a statement: “As part of our ongoing recovery efforts, we are actively investigating a potential data loss. If confirmed, this data loss would affect less than one percent of our user base and specifically peripheral metadata that was written during a six-hour window.”
Result of an erroneous act
GitLab engineers did not reveal the cause of the issue. But technology news site The Register reports that one of its sysadmins had inadvertently deleted a directory containing over 300GB of live production data. The mistake resulted in the loss of nearly 295GB of data.
The GitLab team created a Google Docs file to track the recovery process. “Getting replication to work is proving to be problematic and time-consuming,” the online document reads.
GitLab is also putting long-term measures in place to reduce the chance of similar outages in the future. “We have been working around the clock to resume service on the affected product and set up long-term measures to prevent this from happening again,” the company stated.
San Francisco-based GitLab is aiming to take on GitHub with features such as an app testing service and a project management tool. While the open source platform encourages a large number of individual developers to contribute code to its free community edition, it also offers a paid Enterprise Edition. IBM, NASA, and VMware are already among GitLab's customers.
Last September, GitLab raised $20 million in a Series B round. Founded in 2014, the company now has over 100 employees worldwide.