MySQL zero-day exploit puts some servers at risk of hacking

A zero-day exploit could be used to hack MySQL servers.

A publicly disclosed vulnerability in the MySQL database could allow attackers to completely compromise some servers.

The vulnerability affects “all MySQL servers in default configuration in all version branches (5.7, 5.6, and 5.5) including the latest versions,” as well as the MySQL-derived databases MariaDB and Percona DB, according to Dawid Golunski, the researcher who found it.

The flaw, tracked as CVE-2016-6662, can be exploited to modify the MySQL configuration file (my.cnf) and cause an attacker-controlled library to be executed with root privileges if the MySQL process is started with the mysqld_safe wrapper script.
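
The mysqld_safe detail matters because the wrapper script, not the mysqld daemon itself, is what runs with root privileges and re-reads the configuration file at startup. As a rough illustration of that precondition (not part of Golunski's advisory), an administrator could check whether the wrapper is in use with a sketch like the following; the process name used here is the common default and may differ between distributions.

    import subprocess

    def mysqld_safe_running():
        """Return True if a mysqld_safe wrapper process appears in the process list.

        Assumes a Linux host with `ps` available; the process name is the
        usual default and may vary between distributions and packages.
        """
        ps = subprocess.run(["ps", "-eo", "comm="], capture_output=True, text=True)
        return "mysqld_safe" in ps.stdout.split()

    if __name__ == "__main__":
        if mysqld_safe_running():
            print("mysqld_safe is running: the root-level library-loading path applies")
        else:
            print("mysqld appears to run without the wrapper: that particular path does not apply")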

The exploit can be executed if the attacker has an authenticated connection to the MySQL service, which is common in shared hosting environments, or through an SQL injection flaw, a common type of vulnerability in websites.

Golunski reported the vulnerability to the developers of all three affected database servers, but only MariaDB and Percona DB have received patches so far. Oracle, which develops MySQL, was informed on July 29, according to the researcher, but has yet to fix the flaw.

Oracle releases security updates on a quarterly schedule, and the next batch is expected in October. However, because the MariaDB and Percona patches have been public since the end of August, the researcher decided to release details about the vulnerability Monday so that MySQL admins can take action to protect their servers.

Golunski’s advisory contains a limited proof-of-concept exploit, but some parts have been intentionally left out to prevent widespread abuse. The researcher also reported a second vulnerability to Oracle, CVE-2016-6663, that could further simplify the attack, but he hasn’t published details about it yet.

The disclosure of CVE-2016-6662 was met with some criticism on specialized discussion forums, where some users argued that it’s actually a privilege escalation vulnerability and not a remote code execution one as described, because an attacker would need some level of access to the database.

“As temporary mitigations, users should ensure that no mysql config files are owned by mysql user, and create root-owned dummy my.cnf files that are not in use,” Golunski said in his advisory. “These are by no means a complete solution and users should apply official vendor patches as soon as they become available.”
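
In practice, the check behind that advice is whether the configuration files mysqld_safe might read are owned by root rather than by the mysql user. A minimal sketch of such an audit follows; the candidate paths are assumptions based on common Linux installations and are not taken from the advisory.

    import os
    import pwd

    # Typical my.cnf locations on Linux installs; adjust for your distribution.
    CANDIDATE_PATHS = [
        "/etc/my.cnf",
        "/etc/mysql/my.cnf",
        "/var/lib/mysql/my.cnf",  # a file here owned by 'mysql' is the risky case
    ]

    def owner_of(path):
        """Return the owning username of a file, or None if it does not exist."""
        try:
            return pwd.getpwuid(os.stat(path).st_uid).pw_name
        except FileNotFoundError:
            return None

    if __name__ == "__main__":
        for path in CANDIDATE_PATHS:
            owner = owner_of(path)
            if owner is None:
                print(f"{path}: missing; consider creating a root-owned dummy file")
            elif owner == "mysql":
                print(f"{path}: owned by 'mysql', i.e. writable by the server; fix ownership")
            else:
                print(f"{path}: owned by '{owner}'")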

Oracle didn’t immediately respond to a request for comment on the vulnerability.

[Source: IW]

GitLab database goes out after spam attack

Code-hosting site GitLab has suffered an outage after a “serious” incident on Tuesday with one of its databases, which required emergency maintenance.

The company said today that it had lost six hours of database data for GitLab.com, including issues, merge requests, users, comments, and snippets, and was in the process of restoring the data from a backup. The data was accidentally deleted, according to a Twitter message.

“Losing production data is unacceptable, and in a few days we’ll post the five whys of why this happened and a list of measures we will implement,” GitLab said in a bulletin this morning. Git/wiki repositories and self-hosted installations were unaffected.

The restoration means any database data written between 17:20 UTC and 23:25 UTC will be lost by the time GitLab.com comes back online. Providing a chronology of events, GitLab said it detected on Monday that spammers were hammering its database by creating snippets, rendering it unstable. GitLab blocked the spammers by IP address and removed a user for using a repository as a form of CDN, which had resulted in 47,000 IPs signing in through the same account and causing a high database load; it also removed users for spamming.

The company provided a statement this morning: “This outage did not affect our Enterprise customers or the wide majority of our users. As part of our ongoing recovery efforts, we are actively investigating a potential data loss. If confirmed, this data loss would affect less than one percent of our user base, and specifically peripheral metadata that was written during a six-hour window,” the company said. “We have been working around the clock to resume service on the affected product, and set up long-term measures to prevent this from happening again. We will continue to keep our community updated through Twitter, our blog and other channels.”

While dealing with the problem, GitLab found that database replication had lagged far behind, effectively stopping. “This happened because there was a spike in writes that were not processed on time by the secondary database.” GitLab has been dealing with a series of database issues, including a refusal to replicate.

GitLab.com went down at 6:28 pm PST on Tuesday and was back up at 9:57 am PST today, said Tim Anglade, interim vice president of marketing at the company.

[Source: Javaworld]