MySQL zero-day exploit puts some servers at risk of hacking

A zero-day exploit could be used to hack MySQL servers.

A publicly disclosed vulnerability in the MySQL database could allow attackers to completely compromise some servers.

The vulnerability affects “all MySQL servers in default configuration in all version branches (5.7, 5.6, and 5.5) including the latest versions,” as well as the MySQL-derived databases MariaDB and Percona DB, according to Dawid Golunski, the researcher who found it.

The flaw, tracked as CVE-2016-6662, can be exploited to modify the MySQL configuration file (my.cnf) and cause an attacker-controlled library to be executed with root privileges if the MySQL process is started with the mysqld_safe wrapper script.

The exploit can be executed if the attacker has an authenticated connection to the MySQL service, which is common in shared hosting environments, or through an SQL injection flaw, a common type of vulnerability in websites.

Golunski reported the vulnerability to the developers of all three affected database servers, but only MariaDB and Percona DB have been patched so far. Oracle, which develops MySQL, was informed on July 29, according to the researcher, but has yet to fix the flaw.

Oracle releases security updates on a quarterly schedule, and the next batch is expected in October. However, because the MariaDB and Percona patches have been public since the end of August, the researcher decided to release details about the vulnerability on Monday so that MySQL admins can take action to protect their servers.

Golunski’s advisory contains a limited proof-of-concept exploit, but some parts have been intentionally left out to prevent widespread abuse. The researcher also reported a second vulnerability to Oracle, CVE-2016-6663, that could further simplify the attack, but he hasn’t published details about it yet.

The disclosure of CVE-2016-6662 was met with some criticism on specialized discussion forums, where some users argued that it’s actually a privilege escalation vulnerability and not a remote code execution one as described, because an attacker would need some level of access to the database.

“As temporary mitigations, users should ensure that no mysql config files are owned by mysql user, and create root-owned dummy my.cnf files that are not in use,” Golunski said in his advisory. “These are by no means a complete solution and users should apply official vendor patches as soon as they become available.”
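
For admins who want to script that ownership check before a patch lands, the following is a minimal sketch in Python (standard library only) that flags MySQL config files owned by the mysql user. The paths listed are common defaults rather than an exhaustive list, and the script is no substitute for the official fixes.

    #!/usr/bin/env python3
    """Flag MySQL config files owned by the mysql user (temporary-mitigation check).
    The candidate paths are common defaults and may differ on your system."""
    import os
    import pwd

    CANDIDATE_PATHS = [
        "/etc/my.cnf",
        "/etc/mysql/my.cnf",
        "/usr/local/mysql/etc/my.cnf",
    ]

    for path in CANDIDATE_PATHS:
        if not os.path.exists(path):
            print(f"{path}: missing -- consider a root-owned dummy file here")
            continue
        owner = pwd.getpwuid(os.stat(path).st_uid).pw_name
        status = "RISK: owned by the mysql user" if owner == "mysql" else "ok"
        print(f"{path}: owned by {owner} ({status})")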

Oracle didn’t immediately respond to a request for comments on the vulnerability.

[Source:- IW]

Snowflake now offers data warehousing to the masses

Snowflake, the cloud-based data warehouse solution co-founded by Microsoft alumnus Bob Muglia, is lowering storage prices and adding a self-service option, meaning prospective customers can open an account with nothing more than a credit card.

These changes also raise an intriguing question: How long can a service like Snowflake expect to reside on Amazon, which itself offers services that are more or less in direct competition — and where the raw cost of storage undercuts Snowflake’s own pricing for same?

Open to the public

The self-service option, called Snowflake On Demand, is a change from Snowflake’s original sales model. Rather than calling a sales representative to set up an account, Snowflake users can now provision services themselves with no more effort than would be needed to spin up an AWS EC2 instance.
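
To illustrate how little ceremony the self-service model involves, here is a hedged sketch using Snowflake’s Python connector; the account identifier, credentials, and warehouse name are placeholders invented for the example, and the snowflake-connector-python package is installed separately.

    # Sketch of self-service use via the Snowflake Python connector.
    # Account, credentials, and warehouse name below are placeholders.
    import snowflake.connector

    conn = snowflake.connector.connect(
        user="ANALYST",
        password="********",
        account="xy12345",   # account identifier handed out at sign-up
    )
    cur = conn.cursor()
    try:
        # Compute (a "warehouse") is provisioned on demand with plain SQL.
        cur.execute("CREATE WAREHOUSE IF NOT EXISTS demo_wh WITH WAREHOUSE_SIZE = 'XSMALL'")
        cur.execute("SELECT CURRENT_VERSION()")
        print(cur.fetchone())
    finally:
        cur.close()
        conn.close()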

In a phone interview, Muglia said the reason for transitioning to this model only now was more technical than anything else. Before self-service could be offered, Snowflake had to put protections in place to shield both the service itself and its customers from everything from malice (denial-of-service attacks) to incompetence (honest customers submitting massively malformed queries).

“We wanted to make sure we had appropriately protected the system,” Muglia said, “before we opened it up to anyone, anywhere.”

This effort was further complicated by Snowflake’s relative lack of hard usage limits, which Muglia characterized as being one of its major standout features. “There is no limit to the number of tables you can create,” Muglia said, but he further pointed out that Snowflake has to strike a balance between what it can offer any one customer and protecting the integrity of the service as a whole.

“We get some crazy SQL queries coming in our direction,” Muglia said, “and regardless of what comes in, we need to continue to perform appropriately for that customer as well as other customers. We see SQL queries that are a megabyte in size — the query statements [themselves] are a megabyte in size.” (Many such queries are poorly formed, auto-generated SQL, Muglia claimed.)

Fewer costs, more competition

The other major change is a reduction in storage pricing for the service: $30/TB/month for capacity storage, $50/TB/month for on-demand storage, and $10/TB/month for uncompressed storage.
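
As a rough, illustrative calculation at those list prices (actual bills depend on compression and usage patterns), a 10TB data set works out as follows:

    # Back-of-the-envelope monthly storage cost at the quoted prices.
    CAPACITY_PER_TB = 30    # $/TB/month, pre-purchased capacity
    ON_DEMAND_PER_TB = 50   # $/TB/month, on-demand

    tb_stored = 10
    print(f"Capacity:  ${tb_stored * CAPACITY_PER_TB}/month")   # $300/month
    print(f"On demand: ${tb_stored * ON_DEMAND_PER_TB}/month")  # $500/month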

It’s enough of a reduction in price that Snowflake will be unable to rely on storage costs as a revenue source, since those prices barely pay for the use of Amazon’s services as a storage provider. But Muglia is confident Snowflake is profitable enough overall that such a move won’t impact the company’s bottom line.

“We did the data modeling on this,” said Muglia, “and our margins were always lower on storage than on compute running queries.”

According to the studies Snowflake performed, “when customers put more data into Snowflake, they run more queries…. In almost every scenario you can imagine, they were very much revenue-positive and gross-margin neutral, because people run more queries.”

The long-term implications for Snowflake continuing to reside on Amazon aren’t clear yet, especially since Amazon might well be able to undercut Snowflake by directly offering competitive services.

Muglia, though, is confident that Snowflake’s offering is singular enough to stave off competition for a good long time, and is ready to change things up if need be. “We always look into the possibility of moving to other cloud infrastructures,” Muglia said, “although we don’t have plans to do it right now.”

He also noted that Snowflake competes with Amazon and Redshift right now, but “we have a very different shape of product relative to Redshift…. Snowflake is storing multiple petabytes of data and is able to run hundreds of simultaneous concurrent queries. Redshift can’t do that; no other product can do that. It’s that differentiation that allows us to effectively compete with Amazon, and for that matter Google and Microsoft and Oracle and Teradata.”

[Source:- IW]

Azure brings SQL Server Analysis Services to the cloud

SQL Server Analysis Services, one of the key features of Microsoft’s relational database enterprise offering, is going to the cloud. The company announced Tuesday that it’s launching the public beta of Azure Analysis Services, which gives users cloud-based access to semantic data modeling tools.

The news is part of a host of announcements the company is making at the Professional Association for SQL Server Summit in Seattle this week. On top of the new cloud service, Microsoft also released new tools for migrating to the latest version of SQL Server and an expanded free trial for Azure SQL Data Warehouse. On the hardware side, the company revealed new reference architecture for using SQL Server 2016 with active data sets of up to 145TB.

The actions are all part of Microsoft’s continued investment in the company’s relational database product at a time when it’s trying to get customers to move to its cloud.

Azure Analysis Services is designed to help companies get the benefits of cloud processing for semantic data modeling, while still being able to glean insights from data that’s stored either on-premises or in the public cloud. It’s compatible with databases like SQL Server, Azure SQL Database, Azure SQL Data Warehouse, Oracle and Teradata. Customers that already use SQL Server Analysis Services in their private data centers can take the models from that deployment and move them to Azure, too.

One of the key benefits to using Azure Analysis Services is that it’s a fully managed service. Microsoft deals with the work of figuring out the compute resources underpinning the functionality, and users can just focus on the data.

Like its on-premises predecessor, Azure Analysis Services integrates with Microsoft’s Power BI data visualization tools, providing additional modeling capabilities that go beyond what that service can offer. Azure AS can also connect to other business intelligence software, like Tableau.

Microsoft also is making it easier to migrate from an older version of its database software to SQL Server 2016. To help companies evaluate the difference between their old version of SQL Server and the latest release, Microsoft has launched the Database Experimentation Assistant.

Customers can use the assistant to run experiments across different versions of the software, so they can see what benefits, if any, they’ll get from upgrading while also reducing risk. The Data Migration Assistant, which is supposed to help move workloads, is also being upgraded.

For companies that have large amounts of data they want to store in a cloud database, Microsoft is offering an expanded free trial of Azure SQL Data Warehouse. Users can sign up starting on Tuesday, and get a free month of use. Those customers who want to give it a shot will have to move quickly, though: Microsoft is only taking trial sign-ups until December 31.

Microsoft Corporate Vice President Joseph Sirosh said in an interview that the change to the Azure SQL Data Warehouse trial was necessary because setting up the system to work with actual data warehouse workloads would blow through the typical Azure free trial. Giving people additional capacity to work with should let them have more of an opportunity to test the service before committing to a large deployment.

All of this SQL news comes a little more than a month before AWS Re:Invent, Amazon’s big cloud conference in Las Vegas. It’s likely that we’ll see Amazon unveil some new database products at that event, continuing the ongoing cycle of competition among database vendors in the cloud.

[Source:- IW]

Fire up big data processing with Apache Ignite

Apache Ignite is an in-memory computing platform that can be inserted seamlessly between a user’s application layer and data layer. Apache Ignite loads data from the existing disk-based storage layer into RAM, improving performance by as much as six orders of magnitude (1 million-fold).

The in-memory data capacity can be easily scaled to handle petabytes of data simply by adding more nodes to the cluster. Further, both ACID transactions and SQL queries are supported. Ignite delivers performance, scale, and comprehensive capabilities far above and beyond what traditional in-memory databases, in-memory data grids, and other in-memory-based point solutions can offer by themselves.

Apache Ignite does not require users to rip and replace their existing databases. It works with RDBMS, NoSQL, and Hadoop data stores. Apache Ignite enables high-performance transactions, real-time streaming, and fast analytics in a single, comprehensive data access and processing layer. It uses a distributed, massively parallel architecture on affordable, commodity hardware to power existing or new applications. Apache Ignite can run on premises, on cloud platforms such as AWS and Microsoft Azure, or in a hybrid environment.

[Figure: Apache Ignite architecture]

The Apache Ignite unified API supports SQL, C++, .Net, Java, Scala, Groovy, PHP, and Node.js. The unified API connects cloud-scale applications with multiple data stores containing structured, semistructured, and unstructured data. It offers a high-performance data environment that allows companies to process full ACID transactions and generate valuable insights from real-time, interactive, and batch queries.

Users can keep their existing RDBMS in place and deploy Apache Ignite as a layer between it and the application layer. Apache Ignite automatically integrates with Oracle, MySQL, Postgres, DB2, Microsoft SQL Server, and other RDBMSes. The system automatically generates the application domain model based on the schema definition of the underlying database, then loads the data. In-memory databases typically provide only a SQL interface, whereas Ignite supports a wider group of access and processing paradigms in addition to ANSI SQL. Apache Ignite supports key/value stores, SQL access, MapReduce, HPC/MPP processing, streaming/CEP processing, clustering, and Hadoop acceleration in a single integrated in-memory computing platform.

GridGain Systems donated the original code for Apache Ignite to the Apache Software Foundation in the second half of 2014. Apache Ignite was rapidly promoted from an incubating project to a top-level Apache project in 2015. In the second quarter of 2016, Apache Ignite was downloaded nearly 200,000 times. It is used by organizations around the world.

Architecture

Apache Ignite is JVM-based distributed middleware built on a homogeneous cluster topology that does not require separate server and client nodes. All nodes in an Ignite cluster are equal, and each can play any logical role the application requires at runtime.

A service provider interface (SPI) design is at the core of Apache Ignite. The SPI-based design makes every internal component of Ignite fully customizable and pluggable. This enables tremendous configurability of the system, with adaptability to any existing or future server infrastructure.

Apache Ignite also provides direct support for parallelization of distributed computations based on fork-join, MapReduce, or MPP-style processing. Ignite uses distributed parallel computations extensively, and they are fully exposed at the API level for user-defined functionality.

Key features

In-memory data grid. Apache Ignite includes an in-memory data grid that handles distributed in-memory data management, including ACID transactions, failover, advanced load balancing, and extensive SQL support. The Ignite data grid is a distributed, object-based, ACID transactional, in-memory key-value store. In contrast to traditional database management systems, which utilize disk as their primary storage mechanism, Ignite stores data in memory. By utilizing memory rather than disk, Apache Ignite is up to 1 million times faster than traditional databases.
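
As a concrete illustration of the key-value grid, the sketch below uses the Ignite Python thin client (pyignite); the host, port, cache name, and data are placeholders, and the client package is installed separately from Ignite itself.

    # Minimal key-value sketch against a running Ignite node via pyignite.
    from pyignite import Client

    client = Client()
    client.connect("127.0.0.1", 10800)   # default thin-client port

    cache = client.get_or_create_cache("accounts")
    cache.put(1, {"name": "alice", "balance": 100})   # value lives in cluster RAM
    print(cache.get(1))                               # readable from any node

    client.close()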

[Figure: Apache Ignite data grid]

SQL support. Apache Ignite supports free-form ANSI SQL-99 compliant queries with virtually no limitations. Ignite can use any SQL function, aggregation, or grouping, and it supports distributed, noncolocated SQL joins and cross-cache joins. Ignite also supports the concept of field queries to help minimize network and serialization overhead.
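
The same thin client exposes that SQL surface; the short sketch below is illustrative only, with a hypothetical table and data.

    # Running ANSI SQL against Ignite through the Python thin client.
    from pyignite import Client

    client = Client()
    client.connect("127.0.0.1", 10800)

    client.sql("CREATE TABLE IF NOT EXISTS city (id INT PRIMARY KEY, name VARCHAR, population INT)")
    client.sql("INSERT INTO city (id, name, population) VALUES (?, ?, ?)",
               query_args=[1, "London", 8900000])

    # Filtering, aggregation, and grouping behave as in a conventional SQL database.
    for row in client.sql("SELECT name, population FROM city WHERE population > ?",
                          query_args=[1000000]):
        print(row)

    client.close()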

In-memory compute grid. Apache Ignite includes a compute grid that enables parallel, in-memory processing of CPU-intensive or other resource-intensive tasks such as traditional HPC, MPP, fork-join, and MapReduce processing. Support is also provided for standard Java ExecutorService asynchronous processing.

[Figure: Apache Ignite compute grid]

In-memory service grid. The Apache Ignite service grid provides complete control over services deployed on the cluster. Users can control how many service instances should be deployed on each cluster node, ensuring proper deployment and fault tolerance. The service grid guarantees continuous availability of all deployed services in case of node failures. It also supports automatic deployment of multiple instances of a service, of a service as a singleton, and of services on node startup.

In-memory streaming. In-memory stream processing addresses a large family of applications for which traditional processing methods and disk-based storage, such as disk-based databases or file systems, are inadequate. These applications are extending the limits of traditional data processing infrastructures.

[Figure: Apache Ignite streaming]

Streaming support allows users to query rolling windows of incoming data. This enables users to answer questions such as “What are the 10 most popular products over the last hour?” or “What is the average price in a certain product category for the past 12 hours?”

Another common stream processing use case is pipelining a distributed events workflow. As events are coming into the system at high rates, the processing of events is split into multiple stages, each of which has to be properly routed within a cluster for processing. These customizable event workflows support complex event processing (CEP) applications.

In-memory Hadoop acceleration. The Apache Ignite Accelerator for Hadoop enables fast data processing in existing Hadoop environments via the tools and technology an organization is already using.

[Figure: Apache Ignite Hadoop acceleration]

Ignite in-memory Hadoop acceleration is based on the first dual-mode, high-performance in-memory file system that is 100 percent compatible with Hadoop HDFS and an in-memory optimized MapReduce implementation. Delivering up to 100 times faster performance, in-memory HDFS and in-memory MapReduce provide easy-to-use extensions to disk-based HDFS and traditional MapReduce. This plug-and-play feature requires minimal to no integration. It works with any open source or commercial version of Hadoop 1.x or Hadoop 2.x, including Cloudera, Hortonworks, MapR, Apache, Intel, and AWS. The result is up to 100-fold faster performance for MapReduce and Hive jobs.

Distributed in-memory file system. A unique feature of Apache Ignite is the Ignite File System (IGFS), which is a file system interface to in-memory data. IGFS delivers similar functionality to Hadoop HDFS. It includes the ability to create a fully functional file system in memory. IGFS is at the core of the Apache Ignite In-Memory Accelerator for Hadoop.

The data from each file is split on separate data blocks and stored in cache. Data in each file can be accessed with a standard Java streaming API. For each part of the file, a developer can calculate an affinity and process the file’s content on corresponding nodes to avoid unnecessary networking.

Unified API. The Apache Ignite unified API supports a wide variety of common protocols for the application layer to access data. Supported protocols include SQL, Java, C++, .Net, PHP, MapReduce, Scala, Groovy, and Node.js. Ignite supports several protocols for client connectivity to Ignite clusters, including Ignite Native Clients, REST/HTTP, SSL/TLS, Memcached, and SQL.

Advanced clustering. Apache Ignite provides one of the most sophisticated clustering technologies on JVMs. Ignite nodes can automatically discover each other, which helps scale the cluster when needed without having to restart the entire cluster. Developers can also take advantage of Ignite’s hybrid cloud support, which allows users to establish connections between private clouds and public clouds such as AWS or Microsoft Azure.

Additional features. Apache Ignite provides high-performance, clusterwide messaging functionality. It allows users to exchange data via publish-subscribe and direct point-to-point communication models.

The distributed events functionality in Ignite allows applications to receive notifications about cache events occurring in a distributed grid environment. Developers can use this functionality to be notified about the execution of remote tasks or any cache data changes within the cluster. Event notifications can be grouped and sent in batches and at timely intervals. Batching notifications help attain high cache performance and low latency.

Ignite allows for most of the data structures from the java.util.concurrent framework to be used in a distributed fashion. For example, you could add to a double-ended queue (java.util.concurrent.BlockingDeque) on one node and poll it from another node. Or you could have a distributed primary key generator, which would guarantee uniqueness on all nodes.

Ignite distributed data structures include support for these standard Java APIs: Concurrent map, distributed queues and sets, AtomicLong, AtomicSequence, AtomicReference, and CountDownLatch.

Key integrations

Apache Spark. Apache Spark is a fast, general-purpose engine for large-scale data processing. Ignite and Spark are complementary in-memory computing solutions. They can be used together in many instances to achieve superior performance and functionality.

Apache Spark and Apache Ignite address somewhat different use cases and rarely compete for the same task. The table below outlines some of the key differences.

Apache Spark doesn’t provide shared storage, so data from HDFS or other disk storage must be loaded into Spark for processing. State can be passed from Spark job to job only by saving the processed data back into external storage. Ignite can share Spark state directly in memory, without storing the state to disk.

One of the main integrations for Ignite and Spark is the Apache Ignite Shared RDD API. Ignite RDDs are essentially wrappers around Ignite caches that can be deployed directly inside of executing Spark jobs. Ignite RDDs can also be used with the cache-aside pattern, where Ignite clusters are deployed separately from Spark, but still in-memory. The data is still accessed using Spark RDD APIs.

Spark supports a fairly rich SQL syntax, but it doesn’t support data indexing, so it must do full scans all the time. Spark queries may take minutes even on moderately small data sets. Ignite supports SQL indexes, resulting in much faster queries, so using Spark with Ignite can accelerate Spark SQL more than 1,000-fold. The result set returned by Ignite Shared RDDs also conforms to the Spark Dataframe API, so it can be further analyzed using standard Spark dataframes. Both Spark and Ignite natively integrate with Apache YARN and Apache Mesos, so it’s easier to use them together.

When working with files instead of RDDs, it’s still possible to share state between Spark jobs and applications using the Ignite In-Memory File System (IGFS). IGFS implements the Hadoop FileSystem API and can be deployed as a native Hadoop file system, exactly like HDFS. Ignite plugs in natively to any Hadoop or Spark environment. IGFS can be used with zero code changes in plug-and-play fashion.

Apache Cassandra. Apache Cassandra can serve as a high-performance solution for structured queries. But the data in Cassandra should be modeled such that each predefined query results in one row retrieval. Thus, you must know what queries will be required before modeling the data.

[Source:- Infoworld]

Microsoft: SQL Server for Linux is the real deal

Those who wondered what it would be like to run Microsoft SQL Server on Linux now have an answer. Microsoft has released the first public preview of the long-promised product.

Microsoft also wants to make clear this isn’t a “SQL Server Lite” for those satisfied with a reduced feature set. Microsoft has a four-point plan to make this happen.

First is through broad support for all major enterprise-grade Linux editions: Red Hat Enterprise Linux, Ubuntu Linux, and soon Suse Linux Enterprise Server. “Support” means behaving like other Linux applications on the distributions, not requiring a Microsoft-only methodology for installing or running the app. An introductory video depicts SQL Server installed on RHEL through the system’s yum package manager, and a white paper describes launching SQL Server’s services via systemd.

Second, Microsoft promises the full set of SQL Server 2016’s features for Linux users—not only support for the T-SQL command set, but high-end items like in-memory OLTP, always-on encryption, and row-level security. It will be a first-class citizen on Linux, as SQL Server has been on Windows itself.
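
In practice, connecting to the Linux preview looks the same as connecting to SQL Server on Windows. The sketch below uses pyodbc from a Python client; the ODBC driver name, host, and credentials are assumptions to be swapped for your own environment.

    # Hedged connectivity test against a SQL Server on Linux instance via pyodbc.
    import pyodbc

    conn = pyodbc.connect(
        "DRIVER={ODBC Driver 13 for SQL Server};"
        "SERVER=linux-sql.example.com,1433;"
        "DATABASE=master;UID=sa;PWD=********"
    )
    cursor = conn.cursor()
    cursor.execute("SELECT @@VERSION")   # ordinary T-SQL, same as on Windows
    print(cursor.fetchone()[0])
    conn.close()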

Third is Linux support for the tooling around SQL Server—not SQL Server Management Studio alone, but also the Migration Assistant for relocating workloads to Linux systems and the sqlps PowerShell module. This last item is in line with a possibility introduced when PowerShell was initially open-sourced: Once ported to Linux, it would become part of the support structure for other big-name Microsoft applications as they, too, showed up on the OS. That’s now happening.

By bringing SQL Server to Linux, Microsoft can compete more directly with Oracle, which has long provided its product on Linux. Oracle may be blunting the effects of the strategy by shifting customers toward a cloud-based service model, but any gains are likely to be hard-won.

The other, immediate benefit is to provide Microsoft customers with more places to run SQL Server. Enterprises have historically run mixes of Linux and Windows systems, and SQL Server on Linux will let them shave the costs of running some infrastructure.

Most of all, Microsoft is striving to prove a Microsoft shop can lose little, and preferably nothing, by making a switch—and a new shop eyeing SQL Server has fewer reasons to opt for a competing database that’s Linux-first.

[Source:- Infoworld]

CrateDB packs NoSQL flexibility, SQL familiarity

CrateDB, an open source, clustered database designed for missions like fast text search and analytics, released its first full 1.0 version last week after three years in development.

It’s built upon several existing open source technologies — Elasticsearch and Lucene, for instance — but no direct knowledge of them is needed to deploy it, as CrateDB offers more than a repackaging of those products.

The database caught the attention of InfoWorld’s Peter Wayner back in 2015 because it promised “a search engine like [Apache] Lucene [and ‘its larger, scalable, and distributed cousin Elasticsearch’], but with the structure and querying ease of SQL.”

The idea is to provide more than a full-text search system. CrateDB’s use cases include big data analytics and scalable aggregations across large data sets. It allows querying via standard ANSI SQL, but it uses a distributed, horizontally scalable architecture, so that any number of nodes can be spun up and run side by side with minimal work.

CrateDB gets two major advantages from the NoSQL side. One is support for unstructured data via JSON documents and BLOB storage, with JSON data queryable through SQL as well. Another is support for high-speed writing, to make the database a suitable target for high-speed data ingestion a la Hadoop.

But CrateDB’s biggest draw may be the setup process and the overall level of get-in-and-go usability. The only prerequisite is Java 8, or you can use Docker to run a provided container image. Nodes automatically discover each other as long as they’re on a network that supports multicast. The web UI can bootstrap a cluster with sample data (courtesy of Twitter), and the command-line shell uses conventional SQL syntax for inserting and querying data. Also included is support for PostgreSQL’s wire protocol, although any actual SQL commands sent through it need to adhere to CrateDB’s implementation of SQL.
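
A first session can be as simple as the sketch below, which talks to a local node over the HTTP endpoint (port 4200 by default) using the crate Python client; the table and rows are hypothetical.

    # Quick-start sketch against a local CrateDB node using the crate client.
    from crate import client

    connection = client.connect("http://localhost:4200")
    cursor = connection.cursor()

    cursor.execute("CREATE TABLE IF NOT EXISTS tweets (id INT PRIMARY KEY, body STRING)")
    cursor.execute("INSERT INTO tweets (id, body) VALUES (?, ?)", (1, "hello crate"))
    cursor.execute("REFRESH TABLE tweets")   # make the freshly written row visible to queries
    cursor.execute("SELECT body FROM tweets WHERE id = ?", (1,))
    print(cursor.fetchone())

    connection.close()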

CrateDB is one of a flood of recent database products that each address specific issues that have sprung up: scalability, resiliency, mixing modalities (NoSQL vs. SQL, document vs. graph), high-speed writes, and so on. The philosophy behind such products generally runs like this: Existing solutions are too old, hidebound, or legacy-oriented to solve current and future problems, so we need a clean slate. The trick will be to see whether the benefits of the clean slate outweigh the difficulties of moving to it — hence CrateDB’s emphasis on usability and quick starts.

[Source:- Infoworld]

An easier way to set up SQL Server on an Azure virtual machine

A new setup procedure will allow users to configure SQL Server on Microsoft Azure without the aid of a database administrator.

“The new wizard for building and configuring a new virtual machine with SQL Server 2014 is very well put together,” said Denny Cherry, founder and principal consultant for Denny Cherry and Associates Consulting. “It helps solve a lot of the complexity of building a new SQL Server, specifically around how you need to configure the storage in order to get a high-performing SQL Server VM.”

Joseph D’Antoni, principal consultant at Denny Cherry and Associates Consulting, said that one of the major challenges with Azure was allocating storage. For instance, he said, to configure SQL Server on an Azure VM, you needed to allocate disks manually to get the needed amount of IOPS. This meant you had to know exactly what your storage application needs were for optimal performance, and many people were “kind of guessing,” D’Antoni said. With the new wizard, all you have to do is enter the required number of IOPS and storage is allocated automatically.

Automating SQL Server setup for an Azure VM means that no longer does everything have to be done manually: Connectivity, performance, security and storage are configured automatically during setup. “I think it does simplify what was a pretty complex process,” D’Antoni said.

You can now use the Internet to set up SQL Server connectivity and enable SQL Server authentication through the Azure Web portal. Previously, connecting SQL Server to an Azure VM via the Internet required a multistep process using SQL Server Management Studio. The new automated configuration process lets you pick whether to expand connectivity to the whole Azure virtual network or to connect only within the individual VM.

The new process for configuring SQL Server on an Azure virtual machine also includes automated patching and automated backup. Automated patching allows you to pick a time when you want all your patches to occur, so users can schedule patches to minimize the impact they’ll have on the workload. Automated backup allows you to specify how long to keep backups.

“I think that these are a great enhancement on the old process of having to know how to configure these components manually within the VM,” Cherry said, “because these configurations can get tricky to configure.”

D’Antoni added that this innovation is going to affect smaller organizations the most, because it means that they won’t need an expert to move SQL Server onto an Azure virtual machine. “[The simplified configuration] gives the power to someone who is deploying a VM when they would have needed an administrator or a DBA before. To that extent, it’s kind of a big deal.”

[Source:- searchsqlserver]

Best practices for upgrading after SQL Server 2005 end of life

The SQL Server 2005 end of life is forcing 2005 customers to upgrade, which raises the question: Which version should they upgrade to?

Joseph D’Antoni, principal consultant at Denny Cherry & Associates Consulting, recommended upgrading to the latest SQL Server version that supports the third-party applications a company is running. He said there are big changes in between each of the versions of SQL Server, adding that SQL Server 2014 is particularly notable for the addition of the cardinality estimator. According to D’Antoni, the cardinality estimator can “somewhat drastically” change query performance for some types of data. However, the testing process is the same for all of the versions, and the same challenges — testing, time and licensing — confront any upgrade. “You’re going to have a long testing process anyway. You might as well try to get the latest version, with the longest amount of support.”

“If it were me, right now, contending with 2005, I would go to 2014,” said Robert Sheldon, a freelance writer and technology consultant. “It’s solid, with lots of good features. There would be no reason to go with 2012, unless there were some specific licensing circumstances that were a factor.” Denny Cherry, founder and principal consultant at Denny Cherry & Associates Consulting, recommended upgrading to SQL Server 2012 at the earliest, if not 2014, because “at least they won’t have to worry about upgrading again anytime soon.”

Although SQL Server 2014 is the most current SQL Server version, SQL Server 2016 is in community technology preview. Sheldon said he doesn’t see upgrading to SQL Server 2016 as a good strategy. “Those who want to upgrade to SQL Server 2016 face a bit of a dilemma, because it is still in preview, and I have not yet heard of a concrete release date,” he said. “An organization could use CTP 3.0 to plan its upgrade strategy, but I doubt that is something I would choose to do.”

D’Antoni considered the possibility of waiting until the release of SQL Server 2016 to upgrade. “If they identify a feature that’s compelling, maybe they should wait for 2016,” he said. He added that “2016 is mature enough to roll,” and the only real problem is that it is currently unlicensed.

“If they’re already out of support and planning on moving to 2016, it could be worth waiting the few months,” Cherry said. Furthermore, Cherry said, waiting for SQL Server 2016 could save an organization from having to go through a second upgrade in the future.

Cherry added that, for everyone not waiting for SQL Server 2016, “If they haven’t started the project yet, they should get that project started quickly.” D’Antoni had an even more advanced timetable. He said a company “probably should have started already.” He added, “It’s the testing process that takes a lot of time. The upgrade process is fairly straightforward. Testing the application to make sure it works should have started fairly early.” Ideally, D’Antoni said, by this point, organizations should have done some initial application testing and be planning their migration.

A number of Cherry’s clients, ranging from small businesses to large enterprises, are upgrading because of the approaching SQL Server 2005 end of life. He described SQL Server 2005 end of life as affecting “every size, every vertical.” D’Antoni predicted the small organizations and the largest enterprises will be the hardest hit. The small corporations, he said, are likely to be using SQL Server 2005, because they lack the resources and IT personnel for an easy upgrade. The large enterprises, on the other hand, have so many systems that upgrades become difficult.

D’Antoni explained that, while it is possible to migrate to an Azure SQL database in the cloud instead of upgrading to a more advanced on-premises version of SQL Server, he doesn’t expect to see much of that — not because of difficulties with the product, but because of company culture. Companies who use the cloud, he said, are “more forward-thinking. If you’re still running 2005, you tend to be less on top of things like that.”

[Source:- searchsqlserver]

Why SQL Server 2005 end of life is good news for DBAs

The end of support for a product as wide-reaching as SQL Server can be a stressful time for the database administrators whose job it is to perform upgrades. However, two database experts see SQL Server 2005 end of life on April 12 as a blessing in disguise.

Bala Narasimhan, vice president of products at PernixData, and David Klee, founder and chief architect of Heraflux Technologies, said SQL Server 2005 end of life presents the opportunity DBAs need to take stock of their databases and make changes based on what newer versions of SQL Server have to offer.

SearchSQLServer spoke to Narasimhan and Klee about the best way for DBAs to take advantage of the opportunity that the end of support creates.

This is the first part of a two-part article.

How can we turn SQL Server 2005 end of life into an opportunity for DBAs?

David Klee: I’ve been a DBA. I’ve been a system administrator. I’ve been an IT manager and an architect, and a lot of these different components overlap. My biggest take on it, from the role of the DBA, is that their number one job is to make sure that the data is there when you need it. Secondly, it’s about performance. The upgrade process is, in my eyes, wonderful, because the new versions of SQL Server 2012 and 2014, soon to be 2016, give you a lot more options for enterprise level availability. [They simplify] things. [They give] you better uptime. [They give] you better resiliency to faults. These are features that are just included with [them].

What this is doing is giving people a good opportunity to get the stragglers out of their environment. I’m out in the field a lot. I do see a lot of 2005 machines out here. It’s one of those things where the management mindset is: “If it’s not broke, don’t fix it.” But with end of life and end of support, it’s pretty significant.

Bala Narasimhan: I’m similar to David in terms of background, except that I did R&D at Oracle and other database companies. Since 2005, there has been a lot of innovation that has happened at a lot of database companies on the database side itself, but also on the infrastructure side holding these databases. I think it’s an opportunity to leverage all of that innovation as well. The end of life gives you a chance to look back at all of the innovations on the database side and on the infrastructure side as well. Sometimes, those innovations are complementary and sometimes they’re not. It gives you an opportunity to evaluate those and see what’s right for you in 2016.

In [SQL Server] 2014, there are features such as the columnstore and in-memory computing and all of that. … It may be the case that you can leverage similar functionality without having to upgrade to 2014, because there are other innovations happening in the industry. This may be another example of where you can step back and [ask yourself], “Should I upgrade to 2014 to get there? Or should I upgrade to 2012 because I don’t need it? Or is there another way to get the same capability?”

We’re both advocating for the right tool for the job.

Klee: Exactly. I don’t think that there is a specific answer to that. I think it depends on what that particular DBA wants and what that particular business is trying to achieve. There are multiple ways to achieve that and this is giving you an opportunity to evaluate that.

What are your suggestions for how DBAs can best take advantage of this upgrade?

Narasimhan: This is a time to take a step back. I would recommend having a conversation that includes the DBA; the storage admin; and, if they’re virtualized, the virtualization admin as well and try to understand what all three are trying to achieve because, at the end of the day, you need to run the database on some kind of infrastructure. In 2005, it needn’t have been virtualized, but, in today’s world, it will most probably be virtualized. So, bring them all to the table and try to understand what they need to do from a database perspective and an infrastructure perspective.

Once you’ve done that, there are other conversations to have, such as: “Do we want to run an application rewrite?” For instance, if you’re going to upgrade from 2005 to 2014 because you want to leverage the in-memory capabilities of SQL Server, then you need to revisit your database schema. You need to potentially rewrite your application. There are the cardinality estimation changes that will cause a rewrite. Do you want to incur those costs? Sometimes the answer may be yes and sometimes the answer may be no. If so, it’s not required that you go to 2014. You can go to 2012.

Similarly, it’s a chance to say this application has evolved over time. The optimizer has changed in SQL Server. Therefore the I/O capabilities have changed. Maybe we should talk to the storage admin and the virtualization admin and figure out what kind of infrastructure we’ll need to support this application successfully post-upgrade.

I will, therefore, open up the conversation a little bit and bring other stakeholders to the table before deciding which way to go.

Klee: My take on it is pretty much aligned with that. It’s, essentially, look at the architectural choices that went into that legacy deployment — high availability, performance, virtualization or no virtualization. Revisit today and see if the technology has changed, or you can simplify some of those choices or even replace them with features that weren’t even around back in the day. Availability Groups, virtualization, even public cloud deployments, any of the in-memory technologies, they were just not around back in the 2005 days, and now they’re just extremely powerful and extremely useful.

[Source:- searchsqlserver]

SQL Server 2016 features to make DBAs’ lives easier

SQL Server 2016 is about to launch with a long list of shiny new built-in features, along with much-needed improvements to important but humdrum capabilities database administrators rely on.

The upcoming release, slated for June 1, marks Microsoft’s initial go at a cloud-first version of SQL Server. It also happens to be one of the biggest releases in its history, with something for everyone, said Andrew Snodgrass, a research vice president at Directions on Microsoft, an independent analysis firm in Kirkland, Wash.

Some of the most notable SQL Server 2016 features include performance tuning, real-time operational analytics, visualization on mobile devices, and new hybrid support that allows admins to run their databases on premises and on public cloud services. Microsoft also invested in less sexy, but important, SQL Server 2016 features that hadn’t been improved in some time.

SSRS and SSIS finally get some love

Indeed, SQL Server 2016 is an exciting release for reporting and ETL practitioners, according to Tim Mitchell, principal at Tyleris Data Solutions, a data management services provider in Dallas.

SQL Server Reporting Services (SSRS), long suffering from release after release of few remarkable changes, received a significant makeover, he said. The classic Report Manager interface has given way to a brand new portal that looks and acts like a modern Web app — and it’s brandable, he noted.

The new KPI functionality makes building dashboards much easier, and the mobile reporting tools Microsoft added from the 2015 Datazen acquisition have made SSRS relevant for companies that support reporting for mobile users, according to Mitchell.

The changes in SQL Server Integration Services (SSIS) are more subtle, but significant. When the SSIS catalog was introduced in 2012, it brought many changes but one significant limitation: SSIS packages could no longer be deployed individually; instead, the entire project had to be deployed at once, said Mitchell, who is also a data platform MVP. 

“To their credit, Microsoft heard the roar of negative feedback and have changed this in 2016, once again allowing package-by-package deployment,” he said.

For those boxed in by the limitations of SSIS catalog logging, a new feature that supports custom logging levels brings freedom. Also, for those who were previously forced to install multiple versions of SQL Server Data Tools to support the various versions of SSIS, the new SQL Server Data Tool designer allows for targeting of a specific SQL Server Integration Services version when developing SSIS projects, Mitchell said.

Performance tuning, In-Memory OLTP and PolyBase

Perhaps the most useful SQL Server 2016 feature for database administrators involves performance tuning, which allows DBAs to monitor and record the full history of query execution plans to diagnose issues and optimize plans. It will be invaluable for upgrades and patching to see where changes have impacted performance, Directions on Microsoft’s Snodgrass said.

“Performance tuning with the new Query Store is one of those ‘about time’ solutions,” he added.
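
For readers who want to see what that looks like in practice, here is a hedged sketch that switches Query Store on and lists the slowest recorded statements through the documented catalog views; the connection details are placeholders.

    # Enable Query Store and pull the slowest recorded queries via pyodbc.
    import pyodbc

    conn = pyodbc.connect("DSN=sql2016;UID=sa;PWD=********", autocommit=True)
    cur = conn.cursor()

    cur.execute("ALTER DATABASE CURRENT SET QUERY_STORE = ON")

    cur.execute("""
        SELECT TOP 10 qt.query_sql_text, rs.avg_duration, rs.count_executions
        FROM sys.query_store_query_text AS qt
        JOIN sys.query_store_query AS q  ON q.query_text_id = qt.query_text_id
        JOIN sys.query_store_plan  AS p  ON p.query_id = q.query_id
        JOIN sys.query_store_runtime_stats AS rs ON rs.plan_id = p.plan_id
        ORDER BY rs.avg_duration DESC
    """)
    for text, avg_duration, executions in cur.fetchall():
        print(f"{avg_duration:>12.0f} us  x{executions}  {text[:60]}")
    conn.close()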

Other notable improvements to SQL Server 2016 include PolyBase integration, while the performance features built around In-Memory OLTP and columnstore indexes are finally mature enough for most companies to deploy, according to Snodgrass.

“The supported feature set, as compared to on-disk tables, was not on parity and it made it difficult to migrate to In-Memory tables without a great deal of effort,” he said.

In addition, Microsoft raised the size limit on memory-optimized tables to 2 TB, and those memory-optimized tables can be edited. Another important SQL Server 2016 feature is the ability to combine In-Memory OLTP and columnstore indexes on a single table.
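
A sketch of that combination is shown below, using an assumed schema; it presumes the database already has a memory-optimized filegroup, and the connection string is a placeholder.

    # Illustrative DDL: a memory-optimized table that also carries a
    # clustered columnstore index for analytics over live transactional rows.
    import pyodbc

    conn = pyodbc.connect("DSN=sql2016;UID=sa;PWD=********", autocommit=True)
    conn.execute("""
        -- requires a MEMORY_OPTIMIZED_DATA filegroup on the database
        CREATE TABLE dbo.Trades (
            TradeId  INT           NOT NULL PRIMARY KEY NONCLUSTERED,
            Symbol   NVARCHAR(10)  NOT NULL,
            Price    DECIMAL(18,4) NOT NULL,
            TradedAt DATETIME2     NOT NULL,
            INDEX ix_trades_cci CLUSTERED COLUMNSTORE
        ) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA)
    """)
    conn.close()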

“It’s not for everyone, but there are cases where it would be great to have real-time statistics and trends available from live, transactional data,” Snodgrass said. “Right now the process is time-delayed, since it usually requires grabbing transactions at a point in time and performing analysis somewhere other than on the transactional table.”

However, Snodgrass cautioned, DBAs shouldn’t try this without the proper infrastructure. “You’d better have beefy equipment and failover configured before trying this,” he said.

PolyBase, which provides the ability to access unstructured data in Hadoop, has been in specialized versions of SQL Server since 2012. It will be included in SQL Server 2016 Enterprise edition. That means organizations that didn’t want to spend the money on big equipment can now use existing SQL Server installations to pull unstructured data, Snodgrass said.

“Of course, that doesn’t immediately solve the problem of deploying Hadoop, but it is good for the SQL guys,” he added.

JSON, live queries and analytics

JSON support is an important feature because it allows users to read and write JSON-based documents. This provides a controlled gateway for sharing organizational data with more mobile platforms. Companies have struggled to write database apps for mobile devices, because the data storage options weren’t compatible with on-premises data platforms, Snodgrass said.

“This provides a much easier method for transporting that data between mobile/Web solutions and relational database applications,” he said.
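
As a small illustration of that gateway, the sketch below pulls rows as a JSON document using the new FOR JSON clause; the table and connection details are hypothetical.

    # Fetch relational rows as JSON text via SQL Server 2016's FOR JSON clause.
    import json
    import pyodbc

    conn = pyodbc.connect("DSN=sql2016;UID=app;PWD=********")
    cur = conn.cursor()
    cur.execute("SELECT TOP 5 CustomerId, Name, City FROM dbo.Customers FOR JSON PATH")

    # The JSON document may be split across result rows, so concatenate them.
    payload = "".join(row[0] for row in cur.fetchall())
    print(json.loads(payload))
    conn.close()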

Other SQL Server 2016 features users are excited about are Query Store, Live Query Statistics and Live Query Plans (in Management Studio), according to Gareth Swanepoel, a senior data platform and BI consultant at Pragmatic Works Inc., a SQL Server software and training provider in Middleburg, Fla.

“These [Query features] represent a major improvement to performance tuning on a system,” Swanepoel said. “DBAs will have access to vastly enhanced metrics.”

In addition, SQL Server Management Studio’s release schedule has been separated from the main SQL Server releases, and it will be updated more frequently than before.

Perhaps least impressive of the new SQL Server 2016 features, according to Snodgrass, is SQL Server R Services, which supports advanced analytics with the R programming language.

“The ability to incorporate R scripts in stored procedures is interesting, but the audience is very limited and other tools out there do a good job of this,” he said. “It’s important for the long term, but I suspect adoption will be slow in the beginning.”

SQL Server 2016 editions will include Enterprise, Standard, Express and Developer. The SQL Server 2016 Developer edition, with the full capabilities of the latest SQL Server release, will be free.

[Source:- searchsqlserver]