Take a closer look at your Spark implementation

Apache Spark, the extremely popular data analytics execution engine, was initially released in 2012. It wasn’t until 2015 that Spark really saw an uptick in support, and by November 2015 it was seeing 50 percent more activity than the core Apache Hadoop project itself, with more than 750 contributors from hundreds of companies participating in its development in one form or another.

Spark is a hot new commodity for a reason. Its performance, general-purpose applicability, and programming flexibility combine to make it a versatile execution engine. Yet that versatility also leads to varying levels of support among products and different ways in which solutions are delivered.

While evaluating analytic software products that support Spark, customers should look closely under the hood and examine four key facets of how the support for Spark is implemented:

  • How Spark is utilized inside the platform
  • What you get in a packaged product that includes Spark
  • How Spark is exposed to you and your team
  • How you perform analytics with the different Spark libraries

Spark can be used as a developer tool via its APIs, or it can be used by BI tools via its SQL interface. Or Spark can be embedded in an application, providing access to business users without requiring programming skills and without limiting Spark’s utility through a SQL interface. I examine each of these options below and explain why all Spark support is not the same.

Programming on Spark

If you want the full power of Spark, you can program directly against its processing engine through APIs exposed in Java, Python, Scala, and R. In addition to stream and graph processing components, Spark offers a machine learning library (MLlib) as well as Spark SQL, which lets data tools connect to a Spark engine and query structured data, and lets programmers access data via SQL queries they write themselves.
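
To make this concrete, here is a minimal PySpark sketch that touches the three surfaces mentioned above: the DataFrame API, Spark SQL, and MLlib. It is an illustration only; the file name, column names, and model choice are hypothetical.

```python
# A minimal PySpark sketch of the three surfaces named above: the DataFrame
# API, Spark SQL, and MLlib. File name and column names are hypothetical.
from pyspark.sql import SparkSession
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.regression import LinearRegression

spark = SparkSession.builder.appName("spark-api-demo").getOrCreate()

# DataFrame API: load structured data from a (hypothetical) CSV file.
df = spark.read.csv("sales.csv", header=True, inferSchema=True)

# Spark SQL: register the data and query it through a SQL interface.
df.createOrReplaceTempView("sales")
spark.sql("SELECT region, SUM(amount) AS total FROM sales GROUP BY region").show()

# MLlib: assemble numeric columns into a feature vector and fit a model.
assembled = VectorAssembler(
    inputCols=["units", "discount"], outputCol="features"
).transform(df)
model = LinearRegression(featuresCol="features", labelCol="amount").fit(assembled)
print(model.coefficients)

spark.stop()
```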

A number of vendors offer standalone Spark implementations, and the major Hadoop distribution suppliers also offer Spark within their platforms. Access is exposed through either a command-line or notebook interface.

But performing analytics on core Spark through its APIs is a time-consuming, programming-intensive process. While Spark offers an easier programming model than, say, native Hadoop, it still requires developers. Even for organizations with developer resources, assigning them to lengthy data analytics projects may amount to an intolerable hidden cost. For many organizations, programming on Spark is therefore not a practical option.

BI on Spark

Spark SQL is a standards-based way to access data in Spark. It has been relatively easy for BI products to add support for Spark SQL to query tabular data in Spark. The dialect of SQL used by Spark is similar to that of Apache Hive, making Spark SQL akin to earlier SQL-on-Hadoop technologies.

Although Spark SQL uses the Spark engine behind the scenes, it suffers from the same disadvantage as Hive and Impala: data must be in a structured, tabular format to be queried. This forces Spark to be treated as if it were a relational database, which cripples many of the advantages of a big data engine. Simply put, layering BI on top of Spark requires first transforming the data into a reasonable tabular format that the BI tools can consume.
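
The following PySpark sketch illustrates what that transformation work can look like in practice; the input file and field names are hypothetical. It flattens nested JSON into a tabular view that a BI tool could then query through Spark SQL (for example, via the Spark Thrift JDBC/ODBC server).

```python
# Sketch: flattening semi-structured JSON into the tabular shape BI tools
# expect before it can be queried via Spark SQL. Paths/fields hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, explode

spark = SparkSession.builder.appName("bi-prep-demo").getOrCreate()

# Nested input: each record holds a user struct and an array of items.
raw = spark.read.json("events.json")

# Explode the array and promote struct fields to flat, named columns.
flat = (
    raw.withColumn("item", explode(col("items")))
       .select(
           col("user.id").alias("user_id"),
           col("item.sku").alias("sku"),
           col("item.price").alias("price"),
       )
)

# Once registered, the flat view is what a SQL-speaking BI tool would see.
flat.createOrReplaceTempView("events_flat")
spark.sql("SELECT sku, AVG(price) AS avg_price FROM events_flat GROUP BY sku").show()
```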

Embedding Spark

Another way to leverage Spark is to abstract away its complexity by embedding it deep into a product and taking full advantage of its power behind the scenes. This allows users to leverage the speed and power of Spark without needing developers.

This architecture raises three key questions. First, does the platform truly hide all of the technical complexities of Spark? As a customer, you need to examine how you would carry out each step of the analytic cycle — integration, preparation, analysis, visualization, and operationalization. A number of products offer self-service capabilities that abstract away Spark’s complexities, but others force the analyst to dig down and code — for example, when performing integration and preparation. These products may also require you to first ingest all your data into the Hadoop file system for processing. That lengthens your analytic cycles, creates fragile and fragmented analytic processes, and requires specialized skills.

Second, how does the platform take advantage of Spark? It’s critical to understand how Spark is used in the execution framework. Spark is sometimes embedded in a fashion that does not have the full scalability of a true cluster. This can limit overall performance as the volume of analytic jobs increases.

Third, how are you protected for the future? The strength of being tightly coupled to the Spark engine is also a weakness. The big data industry moves quickly: MapReduce was the predominant engine in Hadoop for six years, Apache Tez became mainstream in 2013, and now Spark has become a major engine. Assuming the technology curve continues to produce new engines at the same rate, Spark will almost certainly be supplanted by a new engine within 18 months, forcing products tightly coupled to Spark to be reengineered — a far from trivial undertaking. Even setting that effort aside, you must consider whether the redesigned product will be fully compatible with what you’ve built on the older version.

The first step to uncovering the full power of Spark is to understand that not all Spark support is created equal. It’s crucial that organizations grasp the differences in Spark implementations and what each approach means for their overall analytic workflow. Only then can they make a strategic buying decision that will meet their needs over the long haul.

Andrew Brust is senior director of market strategy and intelligence at Datameer.


[Source:- IW]

Researchers from the UGR develop a new software which adapts medical technology to see the interior of a sculpture

A student at the University of Granada (UGR) has designed software that adapts current medical imaging technology to analyze the interior of sculptures. The tool lets restorers see inside wood carvings without damaging them, and it was designed for the restoration and conservation of sculptural heritage.

Francisco Javier Melero, professor of Languages and Computer Systems at the University of Granada and director of the project, says that the new software simplifies medical technology and adapts it to the needs of restorers working with wood carvings.

The software, called 3DCurator, is a specialized viewer that applies computed tomography (CT) to the restoration and conservation of sculptural heritage. It adapts medical CT scanning to restoration work and displays a 3-D image of the carving being examined.

Replacing traditional X-rays with this system allows restorers to examine the interior of a statue without the overlapping information produced by older techniques, revealing its internal structure, the age of the wood from which it was made, and any later additions.

“The software that carries out this task has been simplified in order to allow any restorer to easily use it. You can even customize some functions, and it allows the restorers to use the latest medical technology used to study pathologies and apply it to constructive techniques of wood sculptures,” says professor Melero.

 

This system, which can be downloaded for free from www.3dcurator.es, visualizes the hidden information inside a carving: it verifies whether the piece contains metallic elements, identifies damage from xylophages such as termites and the tunnels they bore, and detects plasters or polychrome paintings added later over the original finishes.

The main developer of 3DCurator was Francisco Javier Bolívar, who stressed that the tool will mean a notable breakthrough in the field of conservation and restoration of cultural assets and the analysis of works of art by experts in Art History.

Professor Melero explains that this new tool has already been used to examine two sculptures owned by the University of Granada: a statue of San Juan Evangelista from the 16th century and an Immaculate Conception from the 17th century, both of which can be examined virtually at the Virtual Heritage Site of the Andalusian Universities (patrimonio3d.ugr.es).

[Source:- Phys.org]

 

Microsoft SQL Server 2016 finally gets a release date

Database fans, start your clocks: Microsoft announced Monday that its new version of SQL Server will be out of beta and ready for commercial release on June 1.

The news means that companies waiting to pick up SQL Server 2016 until its general availability can start planning their adoption.

SQL Server 2016 comes with a suite of new features over its predecessor, including a new Stretch Database function that lets users keep some of their data in an on-premises database while sending infrequently used data to Microsoft’s Azure cloud. An application connected to a database using that feature can still see all the data from the different sources, though.

Another marquee feature is the new Always Encrypted function, which makes it possible for users to encrypt data at the column level both at rest and in memory. That’s still only scratching the surface of the software, which also supports creating mobile business intelligence dashboards and new functionality for big data applications.
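
As a sketch of how a client might opt in to Always Encrypted, the Python snippet below assumes pyodbc and a Microsoft ODBC driver recent enough to support the ColumnEncryption connection keyword (13.1 or later); the server, database, table, and column names are hypothetical.

```python
# Sketch: reading an Always Encrypted column from Python with pyodbc.
# Assumes Microsoft ODBC Driver for SQL Server 13.1+ and a column master
# key the client can reach. All names below are hypothetical.
import pyodbc

conn = pyodbc.connect(
    "Driver={ODBC Driver 17 for SQL Server};"
    "Server=myserver.example.com;Database=Payroll;"
    "Trusted_Connection=yes;"
    "ColumnEncryption=Enabled;"  # opt in: driver decrypts protected columns
)

# With ColumnEncryption=Enabled the driver transparently decrypts Salary;
# without it, the same query would return ciphertext.
for row in conn.execute("SELECT EmployeeID, Salary FROM dbo.Employees"):
    print(row.EmployeeID, row.Salary)
```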

SQL Server 2016 will come in four editions: Enterprise, Standard, Developer and Express. The latter two will be available for free, similar to what Microsoft offered with SQL Server 2014.

In addition to its on-premises release, Microsoft will also have a virtual machine available on June 1 through its Azure cloud platform that will make it easy for companies to deploy SQL Server 2016 in the cloud.

Many of the new features in SQL Server 2016 like Always Encrypted and Stretch Database are already available in Microsoft’s Azure SQL Database managed service, but the virtual machine will be useful for companies that prefer to manage their own database infrastructure or that plan to roll out SQL Server 2016 on premises and want to test it in the cloud.

All of this comes a few months after Microsoft shocked the world by announcing that it would also release SQL Server on Linux in the future. That’s a powerful sign of Microsoft’s strategy of making its tools available to users on a wide variety of platforms, even those that the company doesn’t control.


[Source:- Infoworld]

Microsoft rolls out SQL Server 2016 with a special deal to woo Oracle customers

Microsoft has released SQL Server 2016.

The next version of Microsoft’s SQL Server relational database management system is now available, and along with it comes a special offer designed specifically to woo Oracle customers.

Until the end of this month, Oracle users can migrate their databases to SQL Server 2016 and receive the necessary licenses for free with a subscription to Microsoft’s Software Assurance maintenance program.

Microsoft announced the June 1 release date for SQL Server 2016 early last month. Among the more notable enhancements it brings are updateable, in-memory column stores and advanced analytics. As a result, applications can now deploy sophisticated analytics and machine learning models within the database at performance levels as much as 100 times faster than what they’d be outside it, Microsoft said.

The software’s new Always Encrypted feature helps protect data at rest and in memory, while Stretch Database aims to reduce storage costs and keep data available for querying in Microsoft’s Azure cloud. A new PolyBase tool allows you to run queries on external data in Hadoop or Azure Blob storage.

Also included are JSON support, “significantly faster” geospatial query support, a feature called Temporal Tables for “traveling back in time,” and a Query Store for ensuring performance consistency.
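
To give a flavor of the JSON support, here is a hedged Python sketch, again assuming pyodbc; the table and column names are hypothetical. The T-SQL FOR JSON clause asks the server to serialize the rows as JSON, which the client then parses.

```python
# Sketch: exercising SQL Server 2016's JSON support from Python. The FOR
# JSON PATH clause serializes rows server-side; names are hypothetical.
import json
import pyodbc

conn = pyodbc.connect(
    "Driver={ODBC Driver 17 for SQL Server};"
    "Server=myserver.example.com;Database=Sales;Trusted_Connection=yes;"
)

# Small result sets come back as a single JSON string; very large ones can
# be split across rows, so production code would concatenate the chunks.
row = conn.execute(
    "SELECT TOP (5) OrderID, Total FROM dbo.Orders FOR JSON PATH"
).fetchone()

orders = json.loads(row[0])
print(orders)
```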

SQL Server 2016 features were first released in Microsoft Azure and stress-tested through more than 1.7 million Azure SQL DB databases. The software comes in Enterprise and Standard editions along with free Developer and Express versions.

Support for SQL Server 2005 ended in April.

Though Wednesday’s announcement didn’t mention it, Microsoft previously said it’s planning to bring SQL Server to Linux. That version is now due to be released in the middle of next year, Microsoft said.

 

[Source:- Infoworld]

 


Report: Samsung set itself a huge shipments target for the Galaxy S8

Samsung apparently has big expectations regarding the upcoming Galaxy S8. According to a report from The Investor, Samsung wants to ship 60 million units of its flagship device in 2017, a big step up from the previous Galaxy S generations.

Samsung shipped 45 million units of the Galaxy S6 and 48 million units of the Galaxy S7. Increasing shipments by 25 percent to 60 million units means that the upcoming flagship will have to bring quite a few new and interesting features to the table to spark consumer interest, especially after the whole Note 7 fiasco, which didn’t do the company’s reputation any good.

The report also states that the rumors claiming the Galaxy S8 will be released a month later than its predecessor are true. The tech giant will start manufacturing the device in March and begin shipping it to retailers in mid-April.

Some market analysts don’t believe that Samsung will be able to achieve its ambitious goal of shipping 60 million units of its flagship device this year. The high-end smartphone market is getting more and more competitive, as we have seen a lot of great devices, like the OnePlus 3T, that offer a better price-performance ratio than Samsung’s Galaxy S series. The South Korean company is already taking a beating in India, where it’s losing ground against Chinese brands.

Keep in mind that this is all just speculation for now. Samsung has proven many times that it knows how to make great phones that sell extremely well on the market. The Galaxy S8 might just surprise market analysts and meet or even surpass Samsung’s goal.

 

[Source:- Androidauthority]

Digital Offers: Never forget a password again for $15

What do Facebook, email, and your online banking information all have in common? They are all secured behind a username and password. Assuming you use a different password for each service, that ends up being a lot of passwords to memorize, and chances are you will forget at least one of them at some point.

There are very few things in the world more frustrating than forgetting your password, and it usually leaves you pounding on your keyboard or throwing your phone across the room. Windows Central Offers can help you never forget a password again.

 

True Key by Intel Security is an award-winning password manager that will improve your security and keep you from forgetting your login information.

True Key uses biometrics to make you the password! You can use your fingerprints, eyes, or even your face to access your favorite websites and accounts, and right now Windows Central Offers can offer you a one-year license to True Key for only $15.99.

Just look at some of the other features you get with True Key by Intel Security:

  • Access from any device; True Key syncs to your phone, tablet, and computer.
  • Verify access to the app with your face, fingerprint, or via devices you trust for total security.
  • Store and manage up to 10,000 passwords securely in the True Key app, which is accessible only by you via devices you’ve approved.
  • Sync passwords automatically to your phones, tablets, and computers for easy access on any approved device.

Don’t ever lose your password again. True Key by Intel Security will protect your security and give you peace of mind, and right now we can offer it to you for only $15.99.


[Source:- Windowscentral]

Here are a few easy steps to setup UPI apps on your phone

There’s no denying that demonetization has affected the public in a significant way. Thankfully, the government has provided various options for the customers to continue banking as usual with the help of easy-to-use mobile wallets as well as the newly launched UPI services. UPI stands for Unified Payments Interface, and a number of financial institutions have aligned with the project.

What can you do with UPI apps?
Well, you can send snappy payments via IMPS and even request payments from your contacts, provided they are also using a UPI app on their smartphones. This is pretty much like a mobile wallet, but one that is linked directly with the bank.

One advantage with UPI apps is that even if you download an app from another bank, you will still be able to enter account details from your source bank without much fuss.

How to get started?
The apps are Android-only for the time being, but an iPhone app is apparently on its way. Once you get the app of your choice (Yes Bank, ICICI Bank, etc.) from the Play Store, you simply have to enter the mobile number you have registered with the bank. This step will also ask you to create a 4-digit PIN, which is basically a password you will have to enter each time you log in.

Following this process, you will have to create a new and unique VPA, or virtual payment address. This will be used by others to send you money or to identify your account. The VPA can be anything ranging from your name to your phone number.

With the VPA process out of the way, it’s now time to connect to your bank so that all your details are made visible. The transaction limit on UPI is capped at Rs 1,00,000, with the minimum being Rs 50.

To receive money from someone, you merely have to pick out the VPA name/address from your list and then request or schedule a payment. Bear in mind that you can only receive money when the user on the other end also has a UPI app.


[Source:- Techradar]

Azure Data Lake Analytics gets boost from U-SQL, a new SQL variant


The big data movement has frozen out many data professionals who are versed in SQL. Microsoft’s U-SQL programming language tries to get such folks back in the data querying game.

One of the dirty little secrets of big data is that longtime data professionals have often been kept on the sidelines….

Hadoop, Spark and related application frameworks for big data rely more on Java programming skills and less on SQL skills, thus freezing out many SQL veterans — be they Microsoft T-SQL adepts or others.

While continuing its push into Azure cloud support for Hadoop, Hive, Spark, R, and the like, Microsoft is looking to enable T-SQL users to join the big data experience as well.

Its answer is U-SQL, a dialect of T-SQL meant to handle disparate data while supporting C# extensions and, in turn, .NET libraries. It is presently available as part of a public preview of Microsoft’s Azure Data Lake Analytics cloud service, first released last October.

U-SQL is a language intended to support queries on all kinds of data, not just relational data. It is focused solely on enhancements to the SQL SELECT statement, and it automatically deploys code to run in parallel. U-SQL was outlined in detail by Microsoft this week at the Data Science Summit it held in conjunction with its Ignite 2016 conference in Atlanta.
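
To make that description concrete, here is a sketch of what a small U-SQL script looks like, wrapped in a Python string so it can be saved or submitted separately. The input path, schema, and output path are hypothetical; the EXTRACT and OUTPUT statements and the C#-style types show how U-SQL extends the familiar SELECT.

```python
# Sketch of a small U-SQL script, held as a Python string. Paths and the
# schema are hypothetical; Extractors.Tsv() and Outputters.Csv() are the
# built-in U-SQL extractor/outputter names.
USQL_SCRIPT = r"""
@searchlog =
    EXTRACT UserId int, Region string, Duration int
    FROM "/input/searchlog.tsv"
    USING Extractors.Tsv();

@totals =
    SELECT Region, SUM(Duration) AS TotalDuration
    FROM @searchlog
    GROUP BY Region;

OUTPUT @totals
TO "/output/totals.csv"
USING Outputters.Csv();
"""

# The script could be submitted through the Azure portal or the Data Lake
# tooling; the engine parallelizes the SELECT automatically.
with open("region_totals.usql", "w") as f:
    f.write(USQL_SCRIPT)
```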

Beyond Hive and Pig

The Hadoop community has looked to address this by adding SQL-oriented query engines and languages, such as Hive and Pig. But there was a need for something more akin to familiar T-SQL, according to Alex Whittles, founder of the Purple Frog Systems Ltd. data consultancy in Birmingham, England, and a Microsoft MVP.

“Many of the big data tools — for example, MapReduce — come from a Hadoop background, and they tend to require [advanced] Java coding skills. Tools like Hive and Pig are attempts to bridge that gap to try to make it easier for SQL developers,” he said.

But, “in functionality and mindset, the tools are from the programming world and are not too appropriate for people whose job it is to work very closely with a database,” Whittles said.

This is an important way to open up Microsoft’s big data systems to more data professionals, he said.

“U-SQL gives data people the access to a big data platform without requiring as much learning,” he said. That may be doubly important, he added, as Hive-SQL developers are still a small group, compared with the larger SQL army.

U-SQL is something of a differentiator for Azure Data Lake Analytics, according to Warner Chaves, SQL Server principal consultant with The Pythian Group Inc. in Ottawa and also a Microsoft MVP.

“The feedback I have gotten from database administrators is that big data has seemed intimidating, requiring you to deploy and manage Hadoop clusters and to learn a lot of tools, such as Pig, Hive and Spark,” he said. Some of those issues are handled by Microsoft’s Azure cloud deployment — others by U-SQL.

“With U-SQL, the learning curve for someone working in any SQL — not just T-SQL — is way smaller,” he said. “It has a low barrier to entry.”

He added that Microsoft’s scheme for pricing cloud analytics is also an incentive for its use. The Azure Data Lake itself is divided into separate analytics and storage modules, he noted, and users only have to pay for the analytics processing resources when they’re invoked.

More in store

While it looks out for its traditional T-SQL developer base, Microsoft is also pursuing enhanced capabilities for Hive in the Azure Data Lake.

This week at the Strata + Hadoop World conference in New York, technology partner Hortonworks Inc. released its version of an Apache Hive update using LLAP, or Live Long and Process, which uses in-memory and other architectural enhancements to speed Hive queries. It’s meant to work with Microsoft’s HDInsight, a Hortonworks-based Hadoop and big data platform that is another member of the Azure Data Lake Analytics family.


Meanwhile, there’s more in store for U-SQL. As an example, at Microsoft’s Data Science Summit, U-SQL driving force Michael Rys, a principal program manager at Microsoft, showed attendees how U-SQL can be extended, focusing on how queries in the R language can be exposed for use in U-SQL.

The R language has garnered more and more support within Microsoft since the company purchased Revolution Analytics in 2015. While R programmers dramatically lag SQL programmers in numbers, R is finding use in new analytics applications, including ones centered on machine learning.

 

[Source:- techtarget]

Oracle plans two major Java EE upgrades for the cloud

Modernizing Java EE (Enterprise Edition), the server-side version of Java, for the cloud and microservices will require two critical upgrades to the platform. Version 8 is set to arrive in late 2017, followed by Java EE 9 a year later, Oracle revealed on Sunday.

Although Java EE is already in use in cloud deployments, Oracle sees a need to better equip it for this paradigm, said Anil Gaur, Oracle’s group vice president of engineering, at the JavaOne conference in San Francisco. To this end, Java EE 8, which had already been mapped out, will receive two additional sets of capabilities: one for the configuration of services and the other for health checking to monitor and manage service communications.

Oracle will publish Java specification requests (JSRs), which are official amendments to the Java platform, detailing these two efforts. Java EE 8 had been scheduled to arrive by next summer, but the additions will push the release out several months. Java EE 8 also will be fitted with enhancements previously specified, such as ease of development.

The configuration specification will enable services to scale horizontally and help specify capabilities such as quality of service. These details will be maintained outside the application code itself, so when a service expires, the configuration is still there for use with a similar service, Gaur said. With the health-check specification, a consistent set of APIs will let services communicate their health so that developers can specify what corrective measures may need to be taken.

Java EE 9, meanwhile, will foster deployment of smaller units of services, which can independently scale. Key-value store support for using databases such as MongoDB and Cassandra is planned, along with eventual consistency in transactions. Oracle also is exploring support for a server-less model, where code is taken care of in a runtime environment. A state service and multitenancy for tenant-aware routing and deployment will also be considered, along with security capabilities for OAuth and OpenID.

Java EE has been the subject of much debate in recent months, with proponents upset over a perceived lack of direction for the platform. In response, Oracle first expressed its cloud intentions for Java EE in July. “Developers are facing new challenges as they start writing cloud-native applications, which are asynchronous in nature,” Gaur said.

Vendors have begun using Java EE APIs to solve these problems. But each vendor is doing it in its own way, with consistency lacking, Gaur said. With no standard way, it is impossible to ensure compatibility of these services.

Gaur also highlighted use of a reactive style of programming for building loosely coupled, large-scale distributed applications. Moving to the cloud requires migrating from a physical infrastructure to virtualization as well as a shift from monolithic applications, he said.

Also planned for Java EE is comprehensive support for HTTP/2 beyond the support that had already been planned for the Java servlet. A Docker model, enabling packaging of multiple services in a single container, is planned. Work on both Java EE 8 and Java EE 9 is proceeding in parallel, Gaur said.

Asked whether users might wait the extra year for Java EE 9 rather than first upgrading to Java EE 8, Gaur said that might be OK for some people. But others must move at “cloud speed” and need things more quickly, he said.

Multiple parties, including Red Hat and IBM, have been pondering their own improvements to Java EE, believing that Oracle had been neglecting it. But Oracle says its silence simply indicated it had been reflecting on what to do with Java EE.

Oracle on Sunday also detailed some intentions for Java SE, the standard edition of the platform, aside from what already has been specified for the upcoming Java SE 9 platform. Plans include making it easier for developers to deal with boilerplate code by making these code classes easier to read, said Brian Goetz, a Java language architect at Oracle.

The company also wants to expand the scope of type inference, which allows for the removal of redundant code while maintaining the benefits of strong static typing, by applying it to local variables. But Goetz noted that this does not mean Java is being turned into JavaScript.

Also at JavaOne, Oracle announced its intention to soon distribute the Oracle JDK with Docker, the popular Linux container platform. “We want to make Java a first-class citizen for Docker and we want to do it with a distribution model that makes sense,” said Georges Saab, Oracle’s vice president of development. Java and Docker have not been strangers to each other previously, with Docker already popular as a mechanism for providing improved packaging.


[Source:- JW]