‘Trump & Dump’ program aims to profit off Trump tweets

The “Trump & Dump” artificial intelligence program identifies Trump’s market-moving tweets, assesses instantaneously whether the sentiment is positive or negative, and executes a speedy trade.

Techies have devised a program that executes quickfire stock trades to take advantage of President Donald Trump’s habit of attacking individual companies on Twitter.

And the president’s tweets are saving puppies: when the program earns money, the funds are donated to an animal welfare group.

The “Trump & Dump” artificial intelligence program identifies Trump’s market-moving tweets, assesses instantaneously whether the sentiment is positive or negative and then executes a speedy trade.

Ben Gaddis, president of Austin, Texas-based marketing and technology company T3, said the idea was sparked by watching Trump’s actions during his transition, when Twitter attacks on companies such as Boeing and Lockheed Martin sent their share prices tumbling.

“Everyone is asking themselves how to deal with the unpredictability of Trump’s tweets,” Gaddis told AFP. T3’s response was to develop a “bot,” a piece of software that does automated tasks, to trade on the information.
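
T3 has not published the bot’s code, but the pipeline Gaddis describes reduces to three steps: watch the feed, score the sentiment, trade the affected stock. Below is a minimal, self-contained Java sketch of that flow; the word list, the ticker lookup, and the printed “orders” are toy stand-ins for the NLP model and brokerage integrations a real bot would need.

    import java.util.Locale;

    // Hypothetical sketch of the pipeline described above: scan a tweet for
    // a known company, score its sentiment with a crude word list, and pick
    // a trade direction. All of this is illustrative, not T3's code.
    public class TrumpAndDumpSketch {

        // Toy sentiment score: +1 for each positive word, -1 for each negative.
        static int score(String text) {
            int s = 0;
            for (String w : text.toLowerCase(Locale.ROOT).split("\\W+")) {
                if (w.matches("great|win|terrific|tremendous")) s++;
                if (w.matches("no|bad|unfair|ridiculous|disaster")) s--;
            }
            return s;
        }

        public static void main(String[] args) {
            String tweet = "Toyota Motor said it will build a new plant in Mexico. NO WAY!";
            String ticker = tweet.contains("Toyota") ? "TM" : null; // toy ticker lookup
            if (ticker != null) {
                // Negative sentiment: short the stock; positive sentiment: go long.
                System.out.println(score(tweet) < 0 ? "SHORT " + ticker : "BUY " + ticker);
            }
        }
    }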

The company has so far been pleased with the results, which yielded “significant winnings” on two occasions and a “slight” loss on a third trade, Gaddis said.

In early January, T3 scored a “huge” profit by betting Toyota’s share price would fall after Trump lambasted the automaker for building cars in Mexico, according to a short video on the T3 website.

The time lag between the Trump tweet and the T3 trade was only a second.

T3, which has pictures of numerous dogs on its website and describes itself as having “dog friendly offices,” is donating the earnings from the bot-directed trades to the American Society for the Prevention of Cruelty to Animals (ASPCA).

“So now, when President Trump tweets, we save a puppy,” the video says.


[Source:- Phys.org]

Researchers from the UGR develop a new software which adapts medical technology to see the interior of a sculpture

A student at the University of Granada (UGR) has designed software that adapts current medical technology to analyze the interior of sculptures. The tool lets restorers see inside wood carvings without damaging them, and it was designed for the restoration and conservation of sculptural heritage.

Francisco Javier Melero, professor of Languages and Computer Systems at the University of Granada and director of the project, says that the new software simplifies medical technology and adapts it to the needs of restorers working with wood carvings.

The software, called 3DCurator, is a specialized viewer that brings computed tomography (CT) to the field of restoration and conservation of sculptural heritage. It adapts medical CT scanning to restoration work and displays a 3-D image of the carving being studied.

Replacing traditional X-rays with this system allows restorers to examine the interior of a statue without the overlapping information that older techniques presented, revealing its internal structure, the age of the wood from which it was made, and possible later additions.

“The software that carries out this task has been simplified in order to allow any restorer to easily use it. You can even customize some functions, and it allows the restorers to use the latest medical technology used to study pathologies and apply it to constructive techniques of wood sculptures,” says professor Melero.

 

This system, which can be downloaded for free from www.3dcurator.es, visualizes the hidden information of a carving, verifies whether it contains metallic elements, identifies problems caused by xylophages such as termites and the tunnels they make, and detects plasters or polychrome paintings added later over the original finishes.
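
The article does not describe 3DCurator’s internals, but one of those checks, spotting metallic elements such as nails, is conceptually simple: metal attenuates X-rays far more than wood, so its voxels stand out in the CT volume. The rough Java sketch below assumes a raw 16-bit big-endian volume file of known dimensions and an illustrative intensity threshold; both are assumptions, not details from the project.

    import java.io.DataInputStream;
    import java.io.FileInputStream;
    import java.io.IOException;

    // Sketch of one 3DCurator-style check: finding metal (nails, screws)
    // inside a wooden carving by thresholding a CT volume. The file layout,
    // dimensions, and threshold below are illustrative assumptions.
    public class MetalFinder {
        public static void main(String[] args) throws IOException {
            int dimX = 256, dimY = 256, dimZ = 256; // assumed volume dimensions
            int threshold = 3000;                   // assumed "metal" intensity
            long total = (long) dimX * dimY * dimZ, metalVoxels = 0;
            try (DataInputStream in =
                     new DataInputStream(new FileInputStream(args[0]))) {
                for (long i = 0; i < total; i++) {
                    if (in.readShort() >= threshold) metalVoxels++; // dense voxel
                }
            }
            System.out.printf("Voxels above threshold: %d (%.2f%% of volume)%n",
                metalVoxels, 100.0 * metalVoxels / total);
        }
    }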

The main developer of 3DCurator was Francisco Javier Bolívar, who stressed that the tool will mean a notable breakthrough in the field of conservation and restoration of cultural assets and the analysis of works of art by experts in Art History.

Professor Melero explains that this new tool has already been used to examine two sculptures owned by the University of Granada: the statues of San Juan Evangelista, from the 16th century, and of an Immaculate Conception, from the 17th century, which can be examined virtually at the Virtual Heritage Site of the Andalusian Universities (patrimonio3d.ugr.es/).


[Source:- Phys.org]

 

Oracle to Java devs: Stop signing JAR files with MD5

Starting in April, Oracle will treat JAR files signed with the MD5 hashing algorithm as if they were unsigned, which means modern releases of the Java Runtime Environment (JRE) will block those JAR files from running. The shift is long overdue, as MD5’s security weaknesses are well-known, and more secure algorithms should be used for code signing instead.

“Starting with the April Critical Patch Update releases, planned for April 18, 2017, all JRE versions will treat JARs signed with MD5 as unsigned,” Oracle wrote on its Java download page.

Code-signing JAR files bundled with Java libraries and applets is a basic security practice: it lets users know who actually wrote the code and verifies that the code has not been altered or corrupted since it was signed. In recent years, Oracle has been beefing up Java’s security model to better protect systems from external exploits and to allow only signed code to execute certain types of operations. An application without a valid certificate is potentially unsafe.

Newer versions of Java now require all JAR files to be signed with a valid code-signing key, and starting with Java 7 Update 51, unsigned or self-signed applications are blocked from running.

Code signing is an important part of Java’s security architecture, but the MD5 hash weakens the very protections code signing is supposed to provide. Dating back to 1992, MD5 is used for one-way hashing: taking an input and generating a unique cryptographic representation that can be treated as an identifying signature. No two inputs should result in the same hash, but since 2005, security researchers have repeatedly demonstrated collision attacks, in which a file can be modified while still producing the same hash. While MD5 is no longer used for TLS/SSL (Microsoft deprecated MD5 for TLS in 2014), it remains prevalent in other security areas despite its weaknesses.
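
To make the hashing idea concrete, the snippet below runs the same input through MD5 and SHA-256 via Java’s standard MessageDigest API; the latter is the kind of stronger algorithm JAR signing should rely on.

    import java.nio.charset.StandardCharsets;
    import java.security.MessageDigest;

    // One-way hashing with the JCA MessageDigest API: the same call works
    // for the broken MD5 and for the stronger SHA-256.
    public class HashDemo {
        public static void main(String[] args) throws Exception {
            byte[] input = "example input".getBytes(StandardCharsets.UTF_8);
            for (String alg : new String[] {"MD5", "SHA-256"}) {
                StringBuilder hex = new StringBuilder();
                for (byte b : MessageDigest.getInstance(alg).digest(input)) {
                    hex.append(String.format("%02x", b));
                }
                System.out.println(alg + ": " + hex);
            }
        }
    }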

With Oracle’s change, “affected MD-5 signed JAR files will no longer be considered trusted [by the Oracle JRE] and will not be able to run by default, such as in the case of Java applets, or Java Web Start applications,” Erik Costlow, an Oracle product manager with the Java Platform Group, wrote back in October.

Developers need to verify that their JAR files have not been signed using MD5 and, if they have, re-sign the affected files with a more modern algorithm. Administrators need to check with vendors to ensure the files are not MD5-signed. If files are still MD5-signed at the time of the switchover, users will see an error message saying the application could not run. Oracle has already informed vendors and source licensees of the change, Costlow said.
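
Recent JDK builds of jarsigner report the signature and digest algorithms during verification (jarsigner -verify -verbose -certs yourfile.jar), which is the authoritative check. As a quick first pass, a script can also scan a JAR’s META-INF/*.SF signature files for MD5 digest headers; the sketch below does that, though it inspects only the per-entry digest attributes, not the PKCS#7 signature block itself.

    import java.io.IOException;
    import java.io.UncheckedIOException;
    import java.nio.charset.StandardCharsets;
    import java.util.jar.JarFile;

    // Heuristic: flag JARs whose META-INF/*.SF signature files carry
    // MD5 digest headers and therefore need re-signing.
    public class Md5JarCheck {
        public static void main(String[] args) throws IOException {
            try (JarFile jar = new JarFile(args[0])) {
                jar.stream()
                   .filter(e -> e.getName().startsWith("META-INF/")
                             && e.getName().endsWith(".SF"))
                   .forEach(e -> {
                       try {
                           String sf = new String(
                               jar.getInputStream(e).readAllBytes(),
                               StandardCharsets.UTF_8);
                           if (sf.contains("MD5-Digest")) {
                               System.out.println(args[0] + ": " + e.getName()
                                   + " uses MD5 digests; re-sign this JAR");
                           }
                       } catch (IOException ex) {
                           throw new UncheckedIOException(ex);
                       }
                   });
            }
        }
    }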

In cases where the vendor is defunct or unwilling to re-sign the application, administrators can disable the process that checks for signed applications (which has serious security implications), set up custom Deployment Rule Sets for the application’s location, or maintain an Exception Site List, Costlow wrote.

There was plenty of warning. Oracle stopped using MD5withRSA as the default JAR signing option with Java SE 6, which was released in 2006. The MD5 deprecation was originally announced as part of the October 2016 Critical Patch Update and was scheduled to take effect this month as part of the January CPU. To ensure developers and administrators were ready for the shift, the company decided to delay the switch to the April Critical Patch Update, which ships as Oracle Java SE 8u131 and corresponding releases of Oracle Java SE 7, Oracle Java SE 6, and Oracle JRockit R28.

“The CA Security Council applauds Oracle for its decision to treat MD5 as unsigned. MD5 has been deprecated for years, making the move away from MD5 a critical upgrade for Java users,” said Jeremy Rowley, executive vice president of emerging markets at Digicert and a member of the CA Security Council.

Deprecating MD5 has been a long time coming, but it isn’t enough. Oracle should also look at deprecating SHA-1, which has its own set of issues, and adopt SHA-2 for code signing. That course of action would be in line with the current migration, as major browsers have pledged to stop supporting websites using SHA-1 certificates. With most organizations already involved with the SHA-1 migration for TLS/SSL, it makes sense for them to also shift the rest of their certificate and key signing infrastructure to SHA-2.

The good news is that Oracle plans to disable SHA-1 in certificate chains anchored by roots included by default in Oracle’s JDK at the same time MD5 gets deprecated, according to the JRE and JDK Crypto Roadmap, which outlines technical instructions and information about ongoing cryptographic work for Oracle JRE and Oracle JDK. The minimum key length for Diffie-Hellman will also be increased to 1,024 bits later in 2017.

The road map also claims Oracle recently added support for the SHA224withDSA and SHA256withDSA signature algorithms to Java 7, and disabled Elliptic Curve (EC) for keys of less than 256 bits for SSL/TLS for Java 6, 7, and 8.


[Source:- JW]

An app to crack the teen exercise code

Pokémon GO has motivated its players to walk 2.8 billion miles. Now, a new mobile game from UVM researchers aims to encourage teens to exercise with similar virtual rewards.

The game, called “Camp Conquer,” is the brainchild of co-principal investigators Lizzy Pope, assistant professor in the Department of Nutrition and Food Science, and Bernice Garnett, assistant professor of education in the College of Education and Social Services, both of the University of Vermont. The project is one of the first in the area of gamification and obesity, and will test launch with 100 Burlington High School students this month.

Here’s how it works: Real-world physical activity, tracked by a Fitbit, translates into immediate rewards in the game, a capture-the-flag-style water balloon battle with fun, summer camp flair. Every step a player takes in the real world improves their strength, speed, and accuracy in the game. “For every hundred steps, you also get currency in the game to buy items like a special water balloon launcher or new sneakers for your avatar,” says Pope.
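
The only exchange rate the article gives is that currency scales with steps; everything else in the toy Java sketch below, including the strength formula, is invented purely for illustration.

    // Toy sketch of Camp Conquer's step-to-reward mapping. The "coins per
    // hundred steps" rate comes from the article; the strength formula is
    // an invented placeholder.
    public class StepRewards {
        static int coins(int steps)            { return steps / 100; }
        static double strengthBoost(int steps) { return 1.0 + steps / 10_000.0; }

        public static void main(String[] args) {
            int todaysSteps = 8_450; // would come from the Fitbit API in the real game
            System.out.println("Coins earned: " + coins(todaysSteps));
            System.out.printf("Strength multiplier: %.2fx%n", strengthBoost(todaysSteps));
        }
    }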

Helping Schools Meet Mandates

In 2014, Vermont established a requirement for students to get 30 minutes of physical activity during the school day (in addition to P.E. classes), a mark Pope says schools are struggling to hit. And it’s not just Vermont; according to the CDC, only 27 percent of high school students nationwide hit recommended activity goals, and 34 percent of US teens are overweight or obese.

Camp Conquer is a promising solution. The idea struck after Pope and Garnett visited Burlington High School, where they saw students playing lots of games on school-provided Chromebook laptops. Pope and Garnett approached Kerry Swift in UVM’s Office of Technology Commercialization for help. “I thought, if we’re going to make a game, it’s going to be legit,” says Pope.

Where Public Meets Private

The team is working with GameTheory, a local design studio whose mission is to create games that drive change. Pope says forming these types of UVM/private business partnerships to create technology that can be commercialized is the whole point of the UVM Ventures Funds, which partially support this project.

A key result of this public/private partnership, and of the cross-departmental collaboration between Pope and Garnett, was a methodology shift. Pope says it’s less common for health behavior researchers to involve their target demographic in “intervention design.” But Garnett, who has experience in community-based participatory research, and GameTheory, which commonly utilizes customer research, helped shift this. “Putting the experience of Bernice and GameTheory together, we came up with student focus groups to determine when they’re active, why they’re not, and what types of games they like to play,” says Pope. She believes this student input has Camp Conquer poised for success. “It gave us a lot of good insight, and created game champions.”

What does success look like? Pope says in her eyes, “it’s all about exciting kids to move more.” But another important aspect is the eventual commercialization of the app. “It could be widely disseminated at a very low cost. You could imagine a whole school district adopting the app,” says Pope. She expects that if the January test shows promise, GameTheory will take the game forward into the marketplace, and continue to update and improve it. “There’s definitely potential,” says Pope.

[Source:- Phys.org]

Google open-sources test suite to find crypto bugs

Working with cryptographic libraries is hard, and a single implementation mistake can result in serious security problems. To help developers check their code for implementation errors and find weaknesses in cryptographic software libraries, Google has released a test suite as part of Project Wycheproof.

“In cryptography, subtle mistakes can have catastrophic consequences, and mistakes in open source cryptographic software libraries repeat too often and remain undiscovered for too long,” Google security engineers Daniel Bleichenbacher and Thai Duong wrote in a post announcing the project on the Google Security blog.

Named after Australia’s Mount Wycheproof, the world’s smallest mountain, Wycheproof provides developers with a collection of unit tests that detect known weaknesses in cryptographic algorithms and check for expected behaviors. The first set of tests is written in Java because Java has a common cryptographic interface and can be used to test multiple providers.

“We recognize that software engineers fix and prevent bugs with unit testing, and we found that many cryptographic issues can be resolved by the same means,” Bleichenbacher and Duong wrote.

The suite can be used to test such cryptographic algorithms as RSA, elliptic curve cryptography, and authenticated encryption, among others. The project also has ready-to-use tools to check Java Cryptography Architecture providers, such as Bouncy Castle and the default providers in OpenJDK. The engineers said they are converting the tests into sets of test vectors to simplify the process of porting them to other languages.

The tests in this release are low-level and should not be used directly, but they still can be applied for testing the algorithms against publicly known attacks, the engineers said. For example, developers can use Wycheproof to verify whether algorithms are vulnerable to invalid curve attacks or biased nonces in digital signature schemes.

So far the project has been used to run more than 80 test cases and has identified more than 40 vulnerabilities, including one issue in which the private key of the DSA and ECDH algorithms could be recovered under specific circumstances. The weakness was present because libraries were not checking the elliptic curve points they received from outside sources.

“Encodings of public keys typically contain the curve for the public key point. If such an encoding is used in the key exchange, then it is important to check that the public and secret key used to compute the shared ECDH secret are using the same curve. Some libraries fail to do this check,” according to the available documentation.
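
The flavor of that check is easy to reproduce with the plain Java Cryptography Architecture. The sketch below, an illustration of the technique rather than a test from the actual suite, hands an ECDH provider a public-key point that does not lie on the named curve and expects a rejection; the coordinates are arbitrary.

    import java.math.BigInteger;
    import java.security.*;
    import java.security.interfaces.ECPublicKey;
    import java.security.spec.*;
    import javax.crypto.KeyAgreement;

    // Invalid-curve check in the spirit of Wycheproof: a provider should
    // refuse an ECDH peer key whose point is not on the curve. The point
    // (1, 2) is arbitrary and not on secp256r1.
    public class InvalidCurveCheck {
        public static void main(String[] args) throws Exception {
            KeyPairGenerator kpg = KeyPairGenerator.getInstance("EC");
            kpg.initialize(new ECGenParameterSpec("secp256r1"));
            KeyPair victim = kpg.generateKeyPair();

            // Reuse the curve parameters but substitute an off-curve point.
            ECParameterSpec params = ((ECPublicKey) victim.getPublic()).getParams();
            ECPoint offCurve = new ECPoint(BigInteger.ONE, BigInteger.valueOf(2));
            try {
                PublicKey bogus = KeyFactory.getInstance("EC")
                    .generatePublic(new ECPublicKeySpec(offCurve, params));
                KeyAgreement ka = KeyAgreement.getInstance("ECDH");
                ka.init(victim.getPrivate());
                ka.doPhase(bogus, true);
                System.out.println("FAIL: off-curve point was accepted");
            } catch (InvalidKeySpecException | InvalidKeyException expected) {
                System.out.println("OK: provider rejected the off-curve point");
            }
        }
    }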

Cryptographic libraries can be quite difficult to implement, and attackers frequently look for weak cryptographic implementations rather than trying to break the actual mathematics underlying the encryption. With Wycheproof, developers and users can check their libraries against a large number of known attacks without having to dig through academic papers to find out what kind of attacks they need to worry about.

The engineers looked through public cryptographic literature and implemented known attacks to build the test suite. However, developers should not consider the suite to be comprehensive or able to detect all weaknesses, because new weaknesses are always being discovered and disclosed.

“Project Wycheproof is by no means complete. Passing the tests does not imply that the library is secure, it just means that it is not vulnerable to the attacks that Project Wycheproof tries to detect,” the engineers wrote.

Wycheproof comes two weeks after Google released a fuzzer to help developers discover programming errors in open source software. Like OSS-Fuzz, all the code for Wycheproof is available on GitHub. OSS-Fuzz is still in beta, but it has already worked through 4 trillion test cases and uncovered 150 bugs in open source projects since it was publicly announced.


[Source:- JW]

Microsoft rolls out SQL Server 2016 with a special deal to woo Oracle customers

Microsoft has released SQL Server 2016.

The next version of Microsoft’s SQL Server relational database management system is now available, and along with it comes a special offer designed specifically to woo Oracle customers.

Until the end of this month, Oracle users can migrate their databases to SQL Server 2016 and receive the necessary licenses for free with a subscription to Microsoft’s Software Assurance maintenance program.

Microsoft announced the June 1 release date for SQL Server 2016 early last month. Among the more notable enhancements it brings are updateable, in-memory column stores and advanced analytics. As a result, applications can now deploy sophisticated analytics and machine learning models within the database at performance levels as much as 100 times faster than what they’d be outside it, Microsoft said.

The software’s new Always Encrypted feature helps protect data at rest and in memory, while Stretch Database aims to reduce storage costs while keeping data available for querying in Microsoft’s Azure cloud. A new PolyBase tool allows you to run queries on external data in Hadoop or Azure blob storage.

Also included are JSON support, “significantly faster” geospatial query support, a feature called Temporal Tables for “traveling back in time” and a Query Store for ensuring performance consistency.
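
Temporal Tables, for example, are queried through the new FOR SYSTEM_TIME clause. The brief Java sketch below uses Microsoft’s JDBC driver; the server address, credentials, and table name are placeholders.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    // "Traveling back in time" against a SQL Server 2016 temporal table.
    // Connection details and the dbo.Products table are placeholders.
    public class TemporalQuery {
        public static void main(String[] args) throws Exception {
            String url = "jdbc:sqlserver://localhost:1433;databaseName=Sales;"
                       + "user=sa;password=<password>";
            try (Connection conn = DriverManager.getConnection(url);
                 Statement stmt = conn.createStatement();
                 ResultSet rs = stmt.executeQuery(
                     "SELECT ProductID, Price FROM dbo.Products "
                   + "FOR SYSTEM_TIME AS OF '2016-01-01T00:00:00'")) {
                while (rs.next()) {
                    System.out.println(rs.getInt(1) + " -> " + rs.getBigDecimal(2));
                }
            }
        }
    }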

SQL Server 2016 features were first released in Microsoft Azure and stress-tested through more than 1.7 million Azure SQL DB databases. The software comes in Enterprise and Standard editions along with free Developer and Express versions.

Support for SQL Server 2005 ended in April.

Though Wednesday’s announcement didn’t mention it, Microsoft previously said it’s planning to bring SQL Server to Linux. That version is now due to be released in the middle of next year, Microsoft said.

 

[Source:- Infoworld]

 

Squirrel ‘threat’ to critical infrastructure

The real threat to global critical infrastructure is not enemy states or organisations but squirrels, according to one security expert.

Cris Thomas has been tracking power cuts caused by animals since 2013.

Squirrels, birds, rats and snakes have been responsible for more than 1,700 power cuts affecting nearly 5 million people, he told a security conference.

He explained that by tracking these issues, he was seeking to dispel the hype around cyber-attacks.

His Cyber Squirrel 1 project was set up to counteract what he called the “ludicrousness of cyber-war claims by people at high levels in government and industry”, he told the audience at the Shmoocon security conference in Washington.

Squirrels topped the list with 879 “attacks”, followed by:

  • birds – 434
  • snakes – 83
  • raccoons – 72
  • rats – 36
  • martens – 22
  • frogs – 3

He concluded that the damage done by real cyber-attacks – Stuxnet’s destruction of Iranian uranium enrichment centrifuges and disruption to Ukrainian power plants being the most high-profile – was tiny compared with the “cyber-threat” posed by animals.

Most of the animal “attacks” were on power cables, but Mr Thomas also discovered that jellyfish had shut down a Swedish nuclear power plant in 2013 by clogging the pipes that carry cool water to the turbines.

He also found that eight deaths had been attributed to animal attacks on infrastructure, including six caused by squirrels downing power lines that then struck people on the ground.

Mr Thomas – better known as Space Rogue – set up Cyber Squirrel 1 as a Twitter feed in March 2013, initially collecting reports from Google alerts. It has since evolved into a much larger project, collecting information from search engines and other web sources.

Mr Thomas only collected English-language reports and admitted that he was probably capturing just “a fraction” of animal-related power cuts worldwide.

“The major difference between natural events, be they geological, meteorological or furry, is that cyber-attacks are deliberate, orchestrated by humans,” said Luis Corrons, technical director of security firm PandaLabs.

“While natural disasters are taken into account when critical infrastructure facilities are built, that’s not the case with computers. Most critical facilities were never designed to connect to the rest of the world, so the kind of security they implemented was taking care of the physical world surrounding them.

“The number of potential attackers is growing, and the number of potential targets is also going up. So we all need to reinforce our defences to the maximum – and also worry about squirrels.”

 
[Source:- BBC]

Snowflake now offers data warehousing to the masses

Snowflake, the cloud-based data warehouse company headed by Microsoft alumnus Bob Muglia, is lowering storage prices and adding a self-service option, meaning prospective customers can open an account with nothing more than a credit card.

These changes also raise an intriguing question: How long can a service like Snowflake expect to reside on Amazon, which itself offers services that compete with it more or less directly, and whose raw storage prices undercut Snowflake’s own?

Open to the public

The self-service option, called Snowflake On Demand, is a change from Snowflake’s original sales model. Rather than calling a sales representative to set up an account, Snowflake users can now provision services themselves with no more effort than would be needed to spin up an AWS EC2 instance.

In a phone interview, Muglia said the reason for transitioning to this model only now was more technical than anything else. Before self-service could be offered, Snowflake had to put protections into place to ensure that both the service itself and its customers were protected from everything from malice (denial-of-service attacks) to incompetence (honest customers submitting massively malformed queries).

“We wanted to make sure we had appropriately protected the system,” Muglia said, “before we opened it up to anyone, anywhere.”

This effort was further complicated by Snowflake’s relative lack of hard usage limits, which Muglia characterized as being one of its major standout features. “There is no limit to the number of tables you can create,” Muglia said, but he further pointed out that Snowflake has to strike a balance between what it can offer any one customer and protecting the integrity of the service as a whole.

“We get some crazy SQL queries coming in our direction,” Muglia said, “and regardless of what comes in, we need to continue to perform appropriately for that customer as well as other customers. We see SQL queries that are a megabyte in size — the query statements [themselves] are a megabyte in size.” (Many such queries are poorly formed, auto-generated SQL, Muglia claimed.)

Fewer costs, more competition

The other major change is a reduction in storage pricing for the service: $30/TB/month for capacity storage and $50/TB/month for on-demand storage. Since those prices apply to compressed data, the effective rate works out to roughly $10/TB/month measured against uncompressed data.

It’s enough of a reduction in price that Snowflake will be unable to rely on storage costs as a revenue source, since those prices barely pay for the use of Amazon’s services as a storage provider. But Muglia is confident Snowflake is profitable enough overall that such a move won’t impact the company’s bottom line.

“We did the data modeling on this,” said Muglia, “and our margins were always lower on storage than on compute running queries.”

According to the studies Snowflake performed, “when customers put more data into Snowflake, they run more queries…. In almost every scenario you can imagine, they were very much revenue-positive and gross-margin neutral, because people run more queries.”

The long-term implications for Snowflake continuing to reside on Amazon aren’t clear yet, especially since Amazon might well be able to undercut Snowflake by directly offering competitive services.

Muglia, though, is confident that Snowflake’s offering is singular enough to stave off competition for a good long time, and is ready to change things up if need be. “We always look into the possibility of moving to other cloud infrastructures,” Muglia said, “although we don’t have plans to do it right now.”

He also noted that Snowflake competes with Amazon’s Redshift right now, but “we have a very different shape of product relative to Redshift…. Snowflake is storing multiple petabytes of data and is able to run hundreds of simultaneous concurrent queries. Redshift can’t do that; no other product can do that. It’s that differentiation that allows [us] to effectively compete with Amazon, and for that matter Google and Microsoft and Oracle and Teradata.”


[Source:- IW]

New framework uses Kubernetes to deliver serverless app architecture

A new framework built atop Kubernetes is the latest project to offer serverless or AWS Lambda-style application architecture on your own hardware or in a Kubernetes-as-a-service offering.

The Fission framework keeps the details about Docker and Kubernetes away from developers, allowing them to concentrate on the software rather than the infrastructure. It’s another example of Kubernetes becoming a foundational technology.

Some assembly, but little container knowledge, required

Written in Go and created by managed-infrastructure provider Platform9, Fission works in conjunction with any Kubernetes cluster. Developers write functions that use Fission’s API, much the same as they would for AWS Lambda. Each function runs in what’s called an environment, essentially a package for the language runtime. Triggers are used to map functions to events; HTTP routes are one common trigger.

Fission lets users leverage Kubernetes and Docker to run applications without having to deal with either directly. Developers don’t need to know the intimate details of Docker or Kubernetes to ensure their application runs well. Likewise, they don’t have to build app containers, though they can always use a prebuilt container if needed, especially if the app is larger and more complex than a single function can encapsulate.

Fission’s design allows applications to be highly responsive to triggers. When launched, Fission creates a pool of “prewarmed” containers ready to receive functions. According to Fission’s developers, this means an average of 100 milliseconds for the “cold start” of an application, although that figure will likely be dependent on the deployment and the hardware.
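
Fission itself is written in Go and manages real Kubernetes pods, but the prewarming idea is a general pattern: pay the initialization cost up front so that dispatching a request costs only a hand-off. The rough Java analogy below stands in for container startup with a deliberately slow constructor; it is not Fission’s actual implementation.

    import java.util.concurrent.ArrayBlockingQueue;
    import java.util.concurrent.BlockingQueue;

    // Prewarmed-pool pattern: workers are initialized at launch, so serving
    // a request is just a queue take() instead of a slow cold start.
    public class PrewarmedPool {
        static class Worker {
            Worker() throws InterruptedException {
                Thread.sleep(1000); // simulate a slow cold start (image pull, boot)
            }
            String handle(String request) { return "handled: " + request; }
        }

        public static void main(String[] args) throws InterruptedException {
            BlockingQueue<Worker> pool = new ArrayBlockingQueue<>(4);
            for (int i = 0; i < 4; i++) pool.put(new Worker()); // prewarm at launch

            long t0 = System.nanoTime();
            Worker w = pool.take(); // the "cold start" is now just a hand-off
            System.out.println(w.handle("GET /hello"));
            System.out.printf("dispatch took %.1f ms%n", (System.nanoTime() - t0) / 1e6);
            pool.put(w); // return the worker for reuse
        }
    }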

We’re just getting warmed up!

A few good clues indicate what Fission’s developers want to do with the project in the future. For one, the plan includes being as language- and runtime-agnostic as possible. Right now the only environments (read: runtimes) that ship with Fission are for Node.js and Python, but new ones can be added as needed, and existing ones can be modified freely. “An environment is essentially just a container with a web server and dynamic loader,” explains Fission’s documentation.

Another currently underdeveloped area that will be expanded in future releases: The variety of triggers available to Fission. Right now, HTTP routes are the only trigger type that can be used, but plans are on the table to add other triggers, such as Kubernetes events.


[Source:- Javaworld]

Azure brings SQL Server Analysis Services to the cloud

SQL Server Analysis Services, one of the key features of Microsoft’s relational database enterprise offering, is going to the cloud. The company announced Tuesday that it’s launching the public beta of Azure Analysis Services, which gives users cloud-based access to semantic data modeling tools.

The news is part of a host of announcements the company is making at the Professional Association for SQL Server Summit in Seattle this week. On top of the new cloud service, Microsoft also released new tools for migrating to the latest version of SQL Server and an expanded free trial for Azure SQL Data Warehouse. On the hardware side, the company revealed new reference architecture for using SQL Server 2016 with active data sets of up to 145TB.

The actions are all part of Microsoft’s continued investment in its relational database product at a time when it’s trying to get customers to move to its cloud.

Azure Analysis Services is designed to help companies get the benefits of cloud processing for semantic data modeling, while still being able to glean insights from data that’s stored either on-premises or in the public cloud. It’s compatible with databases like SQL Server, Azure SQL Database, Azure SQL Data Warehouse, Oracle and Teradata. Customers that already use SQL Server Analysis Services in their private data centers can take the models from that deployment and move them to Azure, too.

One of the key benefits to using Azure Analysis Services is that it’s a fully managed service. Microsoft deals with the work of figuring out the compute resources underpinning the functionality, and users can just focus on the data.

Like its on-premises predecessor, Azure Analysis Services integrates with Microsoft’s Power BI data visualization tools, providing additional modeling capabilities that go beyond what that service can offer. Azure AS can also connect to other business intelligence software, like Tableau.

Microsoft also is making it easier to migrate from an older version of its database software to SQL Server 2016. To help companies evaluate the difference between their old version of SQL Server and the latest release, Microsoft has launched the Database Experimentation Assistant.

Customers can use the assistant to run experiments across different versions of the software, so they can see what, if any, benefits they’ll get out of the upgrade process, while also helping to reduce risk. The Data Migration Assistant, which is supposed to help move workloads, is also being upgraded.

For companies that have large amounts of data they want to store in a cloud database, Microsoft is offering an expanded free trial of Azure SQL Data Warehouse. Users can sign up starting on Tuesday, and get a free month of use. Those customers who want to give it a shot will have to move quickly, though: Microsoft is only taking trial sign-ups until December 31.

Microsoft Corporate Vice President Joseph Sirosh said in an interview that the change to the Azure SQL Data Warehouse trial was necessary because setting up the system to work with actual data warehouse workloads would blow through the typical Azure free trial. Giving people additional capacity to work with should let them have more of an opportunity to test the service before committing to a large deployment.

All of this SQL news comes a little more than a month before AWS Re:Invent, Amazon’s big cloud conference in Las Vegas. It’s likely that we’ll see Amazon unveil some new database products at that event, continuing the ongoing cycle of competition among database vendors in the cloud.


[Source:- IW]