Researchers from the UGR develop a new software which adapts medical technology to see the interior of a sculpture

A student at the University of Granada (UGR) has designed software that adapts current medical technology to analyze the interior of sculptures. The tool lets restorers see inside wood carvings without damaging them, and it was designed for the restoration and conservation of sculptural heritage.

Francisco Javier Melero, professor of Languages and Computer Systems at the University of Granada and director of the project, says that the new software simplifies medical technology and adapts it to the needs of restorers working with wood carvings.

The software, called 3DCurator, provides a specialized 3-D viewer that applies computed tomography to the restoration and conservation of sculptural heritage. It adapts medical CT scanning to restoration work and displays a 3-D image of the carving to be examined.

Replacing traditional X-rays with this system allows restorers to examine the interior of a statue without the overlapping-information problem of older techniques, and reveals its internal structure, the age of the wood from which it was made, and any later additions.

“The software that carries out this task has been simplified in order to allow any restorer to easily use it. You can even customize some functions, and it allows the restorers to use the latest medical technology used to study pathologies and apply it to constructive techniques of wood sculptures,” says professor Melero.

 

This system, which can be downloaded for free from www.3dcurator.es, visualizes the hidden information in a carving, verifies whether it contains metallic elements, identifies damage from xylophages such as termites and the tunnels they bore, and detects plasters or polychrome paint added later over the original finishes.

The main developer of 3DCurator was Francisco Javier Bolívar, who stressed that the tool will mean a notable breakthrough in the field of conservation and restoration of cultural assets and the analysis of works of art by experts in Art History.

Professor Melero explains that this new tool has already been used to examine two sculptures owned by the University of Granada: a 16th-century statue of San Juan Evangelista and a 17th-century Immaculate Conception, both of which can be examined virtually at the Virtual Heritage Site of the Andalusian Universities (patrimonio3d.ugr.es/).

 

 

[Source:- Phys.org]

 

MySQL zero-day exploit puts some servers at risk of hacking

A zero-day exploit could be used to hack MySQL servers.

A publicly disclosed vulnerability in the MySQL database could allow attackers to completely compromise some servers.

The vulnerability affects “all MySQL servers in default configuration in all version branches (5.7, 5.6, and 5.5) including the latest versions,” as well as the MySQL-derived databases MariaDB and Percona DB, according to Dawid Golunski, the researcher who found it.

The flaw, tracked as CVE-2016-6662, can be exploited to modify the MySQL configuration file (my.cnf) and cause an attacker-controlled library to be executed with root privileges if the MySQL process is started with the mysqld_safe wrapper script.

The exploit can be executed if the attacker has an authenticated connection to the MySQL service, which is common in shared hosting environments, or through an SQL injection flaw, a common type of vulnerability in websites.

Golunski reported the vulnerability to the developers of all three affected database servers, but only MariaDB and Percona DB have received patches so far. Oracle, which develops MySQL, was informed on July 29, according to the researcher, but has yet to fix the flaw.

Oracle releases security updates on a quarterly schedule and the next batch is expected in October. However, because the MariaDB and Percona patches have been public since the end of August, the researcher decided to release details about the vulnerability on Monday so that MySQL admins can take action to protect their servers.

Golunski’s advisory contains a limited proof-of-concept exploit, but some parts have been intentionally left out to prevent widespread abuse. The researcher also reported a second vulnerability to Oracle, CVE-2016-6663, that could further simplify the attack, but he hasn’t published details about it yet.

The disclosure of CVE-2016-6662 was met with some criticism on specialized discussion forums, where some users argued that it’s actually a privilege escalation vulnerability and not a remote code execution one as described, because an attacker would need some level of access to the database.

“As temporary mitigations, users should ensure that no mysql config files are owned by mysql user, and create root-owned dummy my.cnf files that are not in use,” Golunski said in his advisory. “These are by no means a complete solution and users should apply official vendor patches as soon as they become available.”
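Golunski’s suggested mitigation is essentially file-ownership hygiene. As a rough illustration only (not part of the advisory), a short Python sketch like the one below could be run on a server to flag MySQL configuration files that are owned by the mysql user or are world-writable; the list of candidate paths is an assumption and should be adjusted to the actual installation.

```python
import os
import pwd
import stat

# Hypothetical list of common my.cnf locations; adjust for your installation.
CANDIDATE_PATHS = [
    "/etc/my.cnf",
    "/etc/mysql/my.cnf",
    "/usr/local/mysql/etc/my.cnf",
    "/var/lib/mysql/my.cnf",
]

def check_config(path):
    """Warn if a MySQL config file is owned by the mysql user or world-writable."""
    try:
        st = os.stat(path)
    except FileNotFoundError:
        return
    owner = pwd.getpwuid(st.st_uid).pw_name
    world_writable = bool(st.st_mode & stat.S_IWOTH)
    if owner == "mysql":
        print(f"WARNING: {path} is owned by the mysql user")
    elif world_writable:
        print(f"WARNING: {path} is world-writable")
    else:
        print(f"OK: {path} is owned by {owner}")

if __name__ == "__main__":
    for p in CANDIDATE_PATHS:
        check_config(p)
```

A check like this is only a stopgap; as the advisory notes, the real fix is applying vendor patches once they are available.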

Oracle didn’t immediately respond to a request for comments on the vulnerability.

 

 

[Source:- IW]

AI tools came out of the lab in 2016

You shouldn’t anthropomorphize computers: They don’t like it.

That joke is at least as old as Deep Blue’s 1997 victory over then world chess champion Garry Kasparov, but even with the great strides made in the field of artificial intelligence over that time, we’re still not much closer to having to worry about computers’ feelings.

Computers can analyze the sentiments we express in social media, and project expressions on the face of robots to make us believe they are happy or angry, but no one seriously believes, yet, that they “have” feelings, that they can experience them.

Other areas of AI, on the other hand, have seen some impressive advances in both hardware and software in just the last 12 months.

Deep Blue was a world-class chess opponent — and also one that didn’t gloat when it won, or go off in a huff if it lost.

Until this year, though, computers were no match for a human at another board game, Go. That all changed in March when AlphaGo, developed by Google subsidiary DeepMind, beat Lee Sedol, then the world’s strongest Go player, 4-1 in a five-match tournament.

AlphaGo’s secret weapon was a technique called reinforcement learning, where a program figures out for itself which actions bring it closer to its goal, and reinforces those behaviors, without the need to be taught by a person which steps are correct. That meant that it could play repeatedly against itself and gradually learn which strategies fared better.

Reinforcement learning techniques have been around for decades, too, but it’s only recently that computers have had sufficient processing power (to test each possible path in turn) and memory (to remember which steps led to the goal) to play a high-level game of Go at a competitive speed.
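Reinforcement learning at AlphaGo’s scale involves deep neural networks and massive self-play, but the core loop of trying actions and reinforcing the ones that lead toward a goal can be sketched in a few lines. The toy Q-learning example below is a deliberately simplified stand-in for what DeepMind actually built: an agent on a short corridor learns, purely from reward, that stepping right reaches the goal.

```python
import random

# Toy Q-learning sketch: an agent on a line of 5 cells learns to reach the
# rightmost cell, which gives a reward. This illustrates the idea of
# reinforcement learning, not AlphaGo's actual algorithm.
N_STATES = 5          # cells 0..4, reward at cell 4
ACTIONS = [+1, -1]    # step right or left
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2

q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

for episode in range(500):
    state = 0
    while state != N_STATES - 1:
        # Explore occasionally, otherwise take the best-known action.
        if random.random() < EPSILON:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q[(state, a)])
        next_state = min(max(state + action, 0), N_STATES - 1)
        reward = 1.0 if next_state == N_STATES - 1 else 0.0
        # Reinforce actions that move the agent toward the goal.
        best_next = max(q[(next_state, a)] for a in ACTIONS)
        q[(state, action)] += ALPHA * (reward + GAMMA * best_next - q[(state, action)])
        state = next_state

# After training, the learned policy is "step right" in every cell.
print({s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES - 1)})
```

No one tells the agent which move is correct; it discovers that from the reward signal alone, which is the property that let AlphaGo improve by playing against itself.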

Better performing hardware has moved AI forward in other ways too.

In May, Google revealed its TPU (Tensor Processing Unit), a hardware accelerator for its TensorFlow deep learning framework. The ASIC (application-specific integrated circuit) can execute the types of calculations used in machine learning much faster and with less power than even GPUs, and Google has installed several thousand of the chips in its server racks in the slots previously reserved for hard drives.
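The heavy lifting in deep learning frameworks comes down to large tensor operations such as matrix multiplication, which is exactly the kind of work accelerators like the TPU are built for. As a rough illustration only, here is a tiny example written against the TensorFlow 1.x API of that era, run on whatever device is available rather than a TPU:

```python
import tensorflow as tf  # TensorFlow 1.x-style API, as used in 2016

# Two small matrices standing in for the huge tensors used in real models.
a = tf.constant([[1.0, 2.0], [3.0, 4.0]])
b = tf.constant([[5.0, 6.0], [7.0, 8.0]])

# Matrix multiplication is the kind of operation TPUs, GPUs and FPGAs accelerate.
c = tf.matmul(a, b)

with tf.Session() as sess:
    print(sess.run(c))  # [[19. 22.] [43. 50.]]
```

Real workloads multiply matrices with millions of entries, billions of times, which is why dedicated silicon makes such a difference.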

The TPU, it turns out, was one of the things that made AlphaGo so fast, but Google has also used the chip to accelerate mapping and navigation functions in Street View and to improve search results with a new AI tool called RankBrain.

Google is keeping its TPU to itself for now, but others are releasing hardware tuned for AI applications. Microsoft, for example, has equipped some of its Azure servers with FPGAs (field-programmable gate arrays) to accelerate certain machine learning functions, while IBM is targeting similar applications with a range of PowerAI servers that use custom hardware to link its Power CPUs with Nvidia GPUs.

For businesses that want to deploy cutting-edge AI technologies without developing everything from scratch themselves, easy access to high-performance hardware is a start, but not enough. Cloud operators recognize that, and are also offering AI software as a service. Amazon Web Services and Microsoft’s Azure have both added machine learning APIs, while IBM is building a business around cloud access to its Watson AI.

The fact that these hardware and software tools are cloud-based will help AI systems in other ways too.

Being able to store and process enormous volumes of data is only useful to an AI that has access to vast quantities of data from which to learn — data such as that collected and delivered by cloud services, whether it’s information about the weather, mail-order deliveries, requests for rides or people’s tweets.

Access to all that raw data, rather than the minute subset, processed and labelled by human trainers, that was available to previous generations of AIs, is one of the biggest factors transforming AI research today, according to a Stanford University study of the next 100 years in AI.

And while having computers watch everything we do, online and off, in order to learn how to work with us might seem creepy, it’s really only in our minds. The computers don’t feel anything. Yet.

 

[Source:- JW]

 


Hands-on: The Honor Magic looks out of this world

We saw the launch of a couple of concept smartphones from Chinese OEMs at the tail end of 2016, with one being the bezel-less Xiaomi Mi Mix, while the other was a part of the Honor series by Huawei. Here at CES 2017, we got to spend some time with the latter. Here is a closer look at the Honor Magic!

The first thing that will stand out to you about the Honor Magic is its design. The device features glass that curves along the sides and top on both the front and the back, and with a metal frame to tie it all together, the design and build quality of the Magic are absolutely top notch.

The Honor Magic features some impressive hardware as well. Up front is a 5-inch display with a Quad HD resolution, which some may consider overkill for a screen of this size. Under the hood is an in-house 2.3 GHz octa-core processor backed by 4 GB of RAM. You get 64 GB of internal storage, and keeping everything running is a 2,900 mAh battery. On the back is a 12 MP dual camera setup, and up front is an 8 MP shooter.

Apart from the fantastic and unique design, another big feature that sets the Honor Magic apart from the rest is the built-in artificial intelligence that works by taking advantage of existing sensors like the proximity sensor. Additional sensors include an infrared sensor, and even the metal frame itself works as one.

There is a lot the AI can do with this package of sensors, such as have the device automatically wake up and turn on the display whenever you pick up the phone. The device can also scan your face using the infrared sensor in order to recognize you, and it will only show notifications on the lockscreen if it knows it’s you who has picked up the phone. It’s a great way to keep other people from seeing snippets of your messages and emails on the lockscreen, and the impressive part is that it can be set up to work with or without spectacles.

On the software side of things, the Honor Magic is running Android 6.0 Marshmallow, but instead of the regular Emotion UI that you may be familiar with from other Honor smartphones, the Magic runs what Honor is calling the Magic Live UI. This is what allows a lot of the AI features to work, and it is quite a different experience, not only from Huawei’s own user interface, but from any software experience you may be used to.

It is a much cleaner and simpler take on Android, and it doesn’t feel cartoonish or intrusive. There are some useful features baked in, of course, including some Google Now-esque actions: for example, a tracking number for a package you are waiting for, or a boarding pass for a flight you have to catch, will pop up on your phone when the time is right.

 

[Source:- Androidauthority]

Safer, less vulnerable software is the goal of new computer publication

We can create software with 100 times fewer vulnerabilities than we do today, according to computer scientists at the National Institute of Standards and Technology (NIST). To get there, they recommend that coders adopt the approaches they have compiled in a new publication.

The 60-page document, NIST Interagency Report (NISTIR) 8151: Dramatically Reducing Software Vulnerabilities, is a collection of the newest strategies gathered from across industry and other sources for reducing bugs in software. While the report is officially a response to a request for methods from the White House’s Office of Science and Technology Policy, NIST computer scientist Paul E. Black says its contents will help any organization that seeks to author high-quality, low-defect computer code.

“We want coders to know about it,” said Black, one of the publication’s coauthors. “We concentrated on including novel ideas that they may not have heard about already.”

Black and his NIST colleagues compiled these ideas while working with software assurance experts from many private companies in the computer industry as well as several government agencies that generate a good deal of code, including the Department of Defense and NASA. The resulting document reflects their cumulative input and experience.

Vulnerabilities are common in software. Even small applications have hundreds of bugs by some estimates. Lowering these numbers would bring many advantages, such as reducing the number of computer crashes and reboots users need to deal with, not to mention decreasing the number of patch updates they need to download.

The heart of the document, Black said, is five sets of approaches, tools and concepts that can help, all of which can be found in the document’s second section. The approaches are organized under five subheadings that, despite their jargon-heavy titles, each possess a common-sense idea as an overarching principle.

These approaches include: using math-based tools to verify the code will work properly; breaking up a computer’s programs into modular parts so that if one part fails, the whole program doesn’t crash; connecting analysis tools for code that currently operate in isolation; using appropriate programming languages for the task that the code attempts to carry out; and developing evolving and changing tactics for protecting code that is the target of cyberattacks.
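The “modular parts” idea in particular lends itself to a small illustration. The hedged Python sketch below is not taken from the NIST report; it simply shows one common way to isolate each module behind an error boundary so that a failure in one part is contained instead of crashing the whole program. The module names are hypothetical.

```python
# Minimal sketch of fault isolation between modular parts, in the spirit of
# the report's recommendation; the module names here are hypothetical.

def load_report_data():
    raise ValueError("simulated failure in one module")

def render_dashboard():
    return "dashboard rendered"

MODULES = {
    "report_data": load_report_data,
    "dashboard": render_dashboard,
}

def run_all(modules):
    """Run each module behind its own error boundary."""
    results = {}
    for name, module in modules.items():
        try:
            results[name] = module()
        except Exception as exc:
            # The failure is recorded and contained; other modules still run.
            results[name] = f"failed: {exc}"
    return results

if __name__ == "__main__":
    for name, outcome in run_all(MODULES).items():
        print(f"{name}: {outcome}")
```

Running this prints a failure for the broken module while the rest of the program keeps working, which is the behavior the report’s modularity recommendation is after.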

In addition to the techniques themselves, the publication offers recommendations for how the programming community can educate itself about where and how to use them. It also suggests that customers should request the techniques be used in development. “You as a consumer should be able to write it into a contract that you want a vendor to develop software in accordance with these principles, so that it’s as secure as it can be,” Black said.

Security is, of course, a major concern for almost everyone who uses technology these days, and Black said that the White House’s original request for these approaches was part of its 2016 Federal Cybersecurity R&D Strategic Action Plan, intended to be implemented over the next three to seven years. But though ideas of security permeate the document, Black said the strategies have an even broader intent.

“Security tends to bubble to the surface because we’ve got adversaries who want to exploit weaknesses,” he said, “but we’d still want to avoid bugs even without this threat. The effort to stymie them brings up general principles. You’ll notice the title doesn’t have the word ‘security’ in it anywhere.”

 

[Source:- SD]

Department of Labor sues Google over wage data


The U.S. Department of Labor has filed a lawsuit against Google, with the company’s ability to win government contracts at risk.

The agency is seeking what it calls “routine” information about wages and the company’s equal opportunity program. The agency filed a lawsuit with its Office of Administrative Law Judges to gain access to the information, it announced Wednesday.

Google, as a federal contractor, is required to provide the data as part of a compliance check by the agency’s Office of Federal Contract Compliance Programs (OFCCP), according to the Department of Labor. The inquiry is focused on Google’s compliance with equal employment laws, the agency said.

“Like other federal contractors, Google has a legal obligation to provide relevant information requested in the course of a routine compliance evaluation,” OFCCP Acting Director Thomas Dowd said in a press release. “Despite many opportunities to produce this information voluntarily, Google has refused to do so.”

Google said it’s provided hundreds of thousands of records to the agency over the past year, including some related to wages. However, a handful of OFCCP data requests were “overbroad” or would reveal confidential data, the company said in a statement.

“We’ve made this clear to the OFCCP, to no avail,” the statement added. “These requests include thousands of employees’ private contact information which we safeguard rigorously.”

Google must allow the federal government to inspect and copy records relevant to compliance, the Department of Labor said. The agency requested the information in September 2015, but Google provided only partial responses, an agency spokesman said by email.

 

 

[Source:- Javaworld]

Asus ZenFone AR hands-on: Tango, Daydream, 8GB of RAM, oh my!

CES 2017 is in full swing and some of the coolest smartphone announcements at the show are coming from Asus. The Taiwanese manufacturer revealed a ZenFone 3 variant equipped with dual cameras and optical zoom, but it’s actually the ZenFone AR that really piqued our interest, thanks to a combo of great specs and advanced features from Google.

The ZenFone AR is the first high-end Tango phone (and the second overall, after the Lenovo Phab 2 Pro), the first phone that supports Tango and Daydream VR, and the first smartphone with 8GB of RAM.

That’s a lot of premieres, so let’s take a closer look at what the Asus ZenFone brings to the table, live from CES 2017.

As mentioned, the ZenFone AR will be the second commercially available Tango-ready smartphone, but unlike the Phab 2 Pro the ZenFone AR is much sleeker looking, more manageable in the hand, and a lot less bulky.

The phone has a full metal frame that wraps around the entire perimeter of the phone and on the back there’s a very soft leather backing that feels extremely nice and also provides a lot of grip.

Also on the back is a 23MP camera, as well as the optical hardware needed to run Tango applications – this includes sensors for motion tracking and a depth sensing camera. The Tango module takes up the space where the fingerprint sensor is usually found on Asus phones, so the sensor is now placed on the front, embedded in the physical home button, which is flanked by capacitive keys.

If you’re still somehow not familiar with what Tango is, here is a very brief explanation. Tango is an augmented reality (AR) platform created by Google. Born from Google’s advanced technologies labs, Tango eventually graduated last year to become a real product. Tango-equipped phones can understand physical space by measuring the distance between the phone and objects in the real world. In practice, that means Tango phones can be used for AR applications like navigating through indoor spaces, but also for more recreational purposes like games. There are currently over 30 Tango apps in the Play Store, with dozens more coming this year.
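Depth sensing of the kind Tango relies on is conceptually simple: each pixel of a depth camera reports a distance, and with the camera’s intrinsics those distances can be turned into 3-D points in the room. The hedged NumPy sketch below uses a standard pinhole-camera model with made-up intrinsics purely to illustrate the idea; it is not Tango’s actual API.

```python
import numpy as np

# Hypothetical intrinsics for a small depth camera (focal lengths and
# principal point, in pixels); real values come from device calibration.
FX, FY, CX, CY = 200.0, 200.0, 80.0, 60.0

def depth_to_points(depth):
    """Convert a depth image (metres per pixel) into 3-D points (pinhole model)."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - CX) * z / FX
    y = (v - CY) * z / FY
    return np.dstack([x, y, z])  # shape (h, w, 3)

# Fake depth frame: everything is 1.5 m away.
frame = np.full((120, 160), 1.5)
points = depth_to_points(frame)
print(points.shape, points[60, 80])  # centre pixel lies straight ahead: [0, 0, 1.5]
```

From a cloud of points like this, the platform can track motion and map surfaces, which is what lets AR apps anchor content to the real world.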

Besides Tango, ZenFone AR also supports Daydream VR, Google’s virtual reality platform for mobile devices. As such, it’s compatible with Daydream View and other Daydream headsets and you’re pretty much guaranteed to have a good time using mobile VR applications on it.

The phone has all the specs you’d want on a VR-focused device, including a large, bright, and beautiful 5.7-inch Super AMOLED screen with Quad HD resolution and a Snapdragon 821 processor inside (sadly, it won’t get the brand-new Snapdragon 835, as we were hoping). The ZenFone AR will come with either 6GB of RAM or a whopping 8GB of RAM, a first for any smartphone.

All those hardware features will tax the system, so the ZenFone AR includes a vapor cooling system to help prevent the phone from overheating when using its AR and VR capabilities.

Like all Daydream-ready devices, the ZenFone AR runs Android 7.0 Nougat, though not without Asus’ ZenUI customizations on top of it.

 

 

 

[Source:- Androidauthority]

Best practices for upgrading after SQL Server 2005 end of life

The end of support for SQL Server 2005 is forcing customers to upgrade, which raises the question: Which version should they upgrade to?

Joseph D’Antoni, principal consultant at Denny Cherry & Associates Consulting, recommended upgrading to the latest SQL Server version that supports the third-party applications a company is running. He said there are big changes between each of the versions of SQL Server, adding that SQL Server 2014 is particularly notable for its new cardinality estimator. According to D’Antoni, the new cardinality estimator can “somewhat drastically” change query performance for some types of data. However, the testing process is the same for all of the versions, and the same challenges — testing, time and licensing — confront any upgrade. “You’re going to have a long testing process anyway. You might as well try to get the latest version, with the longest amount of support.”
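One concrete way to handle the cardinality estimator change during upgrade testing is through the database compatibility level: on SQL Server 2014, a database at compatibility level 120 uses the new estimator, while level 110 (the SQL Server 2012 level) keeps the legacy behavior. The Python sketch below simply reports each database’s compatibility level so a team knows which estimator its queries will get; the connection string is a placeholder, and it assumes the pyodbc package and a SQL Server ODBC driver are installed.

```python
import pyodbc

# Placeholder connection string; replace the server, credentials and driver
# name with values for your own environment.
CONN_STR = "DRIVER={SQL Server};SERVER=myserver;UID=myuser;PWD=mypassword"

# On SQL Server 2014, compatibility_level 120 enables the new cardinality
# estimator; 110 keeps the legacy behaviour.
QUERY = "SELECT name, compatibility_level FROM sys.databases ORDER BY name"

with pyodbc.connect(CONN_STR) as conn:
    cursor = conn.cursor()
    for name, level in cursor.execute(QUERY):
        estimator = "new" if level >= 120 else "legacy"
        print(f"{name}: compatibility_level={level} ({estimator} cardinality estimator)")
```

Comparing query plans and timings under both levels during the testing phase gives an early warning of the “somewhat drastic” performance changes D’Antoni describes.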

“If it were me, right now, contending with 2005, I would go to 2014,” said Robert Sheldon, a freelance writer and technology consultant. “It’s solid, with lots of good features. There would be no reason to go with 2012, unless there were some specific licensing circumstances that were a factor.” Denny Cherry, founder and principal consultant at Denny Cherry & Associates Consulting, recommended upgrading to SQL Server 2012 at the earliest, if not 2014, because “at least they won’t have to worry about upgrading again anytime soon.”

Although SQL Server 2014 is the most current SQL Server version, SQL Server 2016 is in community technology preview. Sheldon said he doesn’t see upgrading to SQL Server 2016 as a good strategy. “Those who want to upgrade to SQL Server 2016 face a bit of a dilemma, because it is still in preview, and I have not yet heard of a concrete release date,” he said. “An organization could use CTP 3.0 to plan its upgrade strategy, but I doubt that is something I would choose to do.”

D’Antoni considered the possibility of waiting until the release of SQL Server 2016 to upgrade. “If they identify a feature that’s compelling, maybe they should wait for 2016,” he said. He added that “2016 is mature enough to roll,” and the only real problem is that it is currently unlicensed.

“If they’re already out of support and planning on moving to 2016, it could be worth waiting the few months,” Cherry said. Furthermore, Cherry said, waiting for SQL Server 2016 could save an organization from having to go through a second upgrade in the future.

Cherry added that, for everyone not waiting for SQL Server 2016, “If they haven’t started the project yet, they should get that project started quickly.” D’Antoni had an even more advanced timetable. He said a company “probably should have started already.” He added, “It’s the testing process that takes a lot of time. The upgrade process is fairly straightforward. Testing the application to make sure it works should have started fairly early.” Ideally, D’Antoni said, by this point, organizations should have done some initial application testing and be planning their migration.

A number of Cherry’s clients, ranging from small businesses to large enterprises, are upgrading because of the approaching SQL Server 2005 end of life. He described SQL Server 2005 end of life as affecting “every size, every vertical.” D’Antoni predicted the small organizations and the largest enterprises will be the hardest hit. The small corporations, he said, are likely to be using SQL Server 2005, because they lack the resources and IT personnel for an easy upgrade. The large enterprises, on the other hand, have so many systems that upgrades become difficult.

D’Antoni explained that, while it is possible to migrate to an Azure SQL database in the cloud instead of upgrading to a more advanced on-premises version of SQL Server, he doesn’t expect to see much of that — not because of difficulties with the product, but because of company culture. Companies who use the cloud, he said, are “more forward-thinking. If you’re still running 2005, you tend to be less on top of things like that.”

 

 
[Source:- searchsqlserver]

Why SQL Server 2005 end of life is good news for DBAs

The end of support for a product as wide-reaching as SQL Server can be a stressful time for the database administrators whose job it is to perform upgrades. However, two database experts see SQL Server 2005 end of life on April 12 as a blessing in disguise.

Bala Narasimhan, vice president of products at PernixData, and David Klee, founder and chief architect of Heraflux Technologies, said SQL Server 2005 end of life presents the opportunity DBAs need to take stock of their databases and make changes based on what newer versions of SQL Server have to offer.

SearchSQLServer spoke to Narasimhan and Klee about the best way for DBAs to take advantage of the opportunity that the end of support creates.

This is the first part of a two-part article.

How can we turn SQL Server 2005 end of life into an opportunity for DBAs?

David Klee: I’ve been a DBA. I’ve been a system administrator. I’ve been an IT manager and an architect, and a lot of these different components overlap. My biggest take on it, from the role of the DBA, is that their number one job is to make sure that the data is there when you need it. Secondly, it’s about performance. The upgrade process is, in my eyes, wonderful, because the new versions of SQL Server 2012 and 2014, soon to be 2016, give you a lot more options for enterprise level availability. [They simplify] things. [They give] you better uptime. [They give] you better resiliency to faults. These are features that are just included with [them].

What this is doing is giving people a good opportunity to get the stragglers out of their environment. I’m out in the field a lot. I do see a lot of 2005 machines out here. It’s one of those things where the management mindset is: “If it’s not broke, don’t fix it.” But with end of life and end of support, it’s pretty significant.

Bala Narasimhan: I’m similar to David in terms of background, except that I did R&D at Oracle and other database companies. Since 2005, there has been a lot of innovation that has happened at a lot of database companies on the database side itself, but also on the infrastructure side holding these databases. I think it’s an opportunity to leverage all of that innovation as well. The end of life gives you a chance to look back at all of the innovations on the database side and on the infrastructure side as well. Sometimes, those innovations are complementary and sometimes they’re not. It gives you an opportunity to evaluate those and see what’s right for you in 2016.

In [SQL Server] 2014, there are features such as the columnstore and in-memory computing and all of that. … It may be the case that you can leverage similar functionality without having to upgrade to 2014, because there are other innovations happening in the industry. This may be another example of where you can step back and [ask yourself], “Should I upgrade to 2014 to get there? Or should I upgrade to 2012 because I don’t need it? Or is there another way to get the same capability?”

We’re both advocating for the right tool for the job.

Klee: Exactly. I don’t think that there is a specific answer to that. I think it depends on what that particular DBA wants and what that particular business is trying to achieve. There are multiple ways to achieve that and this is giving you an opportunity to evaluate that.

What are your suggestions for how DBAs can best take advantage of this upgrade?

Narasimhan: This is a time to take a step back. I would recommend having a conversation that includes the DBA; the storage admin; and, if they’re virtualized, the virtualization admin as well and try to understand what all three are trying to achieve because, at the end of the day, you need to run the database on some kind of infrastructure. In 2005, it needn’t have been virtualized, but, in today’s world, it will most probably be virtualized. So, bring them all to the table and try to understand what they need to do from a database perspective and an infrastructure perspective.

Once you’ve done that, there are other conversations to have, such as: “Do we want to run an application rewrite?” For instance, if you’re going to upgrade from 2005 to 2014 because you want to leverage the in-memory capabilities of SQL Server, then you need to revisit your database schema. You need to potentially rewrite your application. There are the cardinality estimation changes that will cause a rewrite. Do you want to incur those costs? Sometimes the answer may be yes and sometimes the answer may be no. If so, it’s not required that you go to 2014. You can go to 2012.

Similarly, it’s a chance to say this application has evolved over time. The optimizer has changed in SQL Server. Therefore the I/O capabilities have changed. Maybe we should talk to the storage admin and the virtualization admin and figure out what kind of infrastructure we’ll need to support this application successfully post-upgrade.

I will, therefore, open up the conversation a little bit and bring other stakeholders to the table before deciding which way to go.

Klee: My take on it is pretty much aligned with that. It’s, essentially, look at the architectural choices that went into that legacy deployment — high availability, performance, virtualization or no virtualization. Revisit today and see if the technology has changed, or you can simplify some of those choices or even replace them with features that weren’t even around back in the day. Availability Groups, virtualization, even public cloud deployments, any of the in-memory technologies, they were just not around back in the 2005 days, and now they’re just extremely powerful and extremely useful.

 

 

[Source:- searchsqlserver]