AI tools came out of the lab in 2016

You shouldn’t anthropomorphize computers: They don’t like it.

That joke is at least as old as Deep Blue’s 1997 victory over then world chess champion Garry Kasparov, but even with the great strides made in the field of artificial intelligence over that time, we’re still not much closer to having to worry about computers’ feelings.

Computers can analyze the sentiments we express in social media and project expressions onto the faces of robots to make us believe they are happy or angry, but no one seriously believes, yet, that computers can actually experience feelings.

Other areas of AI, on the other hand, have seen some impressive advances in both hardware and software in just the last 12 months.

Deep Blue was a world-class chess opponent — and also one that didn’t gloat when it won, or go off in a huff if it lost.

Until this year, though, computers were no match for a human at another board game, Go. That all changed in March, when AlphaGo, developed by Google subsidiary DeepMind, beat Lee Sedol, then the world’s strongest Go player, 4-1 in a five-game match.

AlphaGo’s secret weapon was a technique called reinforcement learning, where a program figures out for itself which actions bring it closer to its goal, and reinforces those behaviors, without the need to be taught by a person which steps are correct. That meant that it could play repeatedly against itself and gradually learn which strategies fared better.

Reinforcement learning techniques have been around for decades, too, but it’s only recently that computers have had sufficient processing power (to explore enormous numbers of candidate moves) and memory (to remember which choices led toward the goal) to play a high-level game of Go at a competitive speed.
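To make the idea concrete, here is a minimal tabular Q-learning sketch in Java. It is purely illustrative: the toy corridor world and the constants are invented for this example, and AlphaGo’s actual training couples reinforcement learning with deep neural networks and tree search.

import java.util.Random;

// A six-state corridor: the agent starts at 0 and is rewarded for reaching 5.
// Action 0 moves left, action 1 moves right; the table q[state][action]
// accumulates which action in each state brings the goal closer.
public class QLearningSketch {
    static final int STATES = 6, ACTIONS = 2, GOAL = 5;
    static final double ALPHA = 0.1, GAMMA = 0.9, EPSILON = 0.1;

    public static void main(String[] args) {
        double[][] q = new double[STATES][ACTIONS];
        Random rnd = new Random(42);

        for (int episode = 0; episode < 1000; episode++) {
            int s = 0;
            while (s != GOAL) {
                // Epsilon-greedy: mostly exploit what has been learned, sometimes explore.
                int a = (rnd.nextDouble() < EPSILON || q[s][0] == q[s][1])
                        ? rnd.nextInt(ACTIONS)
                        : (q[s][1] > q[s][0] ? 1 : 0);
                int next = (a == 1) ? Math.min(s + 1, GOAL) : Math.max(s - 1, 0);
                double reward = (next == GOAL) ? 1.0 : 0.0;
                // Reinforce the action toward the value of the best follow-up move.
                double bestNext = Math.max(q[next][0], q[next][1]);
                q[s][a] += ALPHA * (reward + GAMMA * bestNext - q[s][a]);
                s = next;
            }
        }
        System.out.println("Prefers moving toward the goal: " + (q[0][1] > q[0][0]));
    }
}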

Better performing hardware has moved AI forward in other ways too.

In May, Google revealed its TPU (Tensor Processing Unit), a hardware accelerator for its TensorFlow deep learning framework. The ASIC (application-specific integrated circuit) can execute the types of calculations used in machine learning much faster, and using less power, than even GPUs, and Google has installed several thousand of them in its server racks in the slots previously reserved for hard drives.

The TPU, it turns out, was one of the things that made AlphaGo so fast, but Google has also used the chip to accelerate mapping and navigation functions in Street View and to improve search results with a new AI tool called RankBrain.

Google is keeping its TPU to itself for now, but others are releasing hardware tuned for AI applications. Microsoft, for example, has equipped some of its Azure servers with FPGAs (field-programmable gate arrays) to accelerate certain machine learning functions, while IBM is targeting similar applications with a range of PowerAI servers that use custom hardware to link its Power CPUs with Nvidia GPUs.

For businesses that want to deploy cutting-edge AI technologies without developing everything from scratch themselves, easy access to high-performance hardware is a start, but not enough. Cloud operators recognize that, and are also offering AI software as a service. Amazon Web Services and Microsoft’s Azure have both added machine learning APIs, while IBM is building a business around cloud access to its Watson AI.

The fact that these hardware and software tools are cloud-based will help AI systems in other ways too.

Being able to store and process enormous volumes of data is only useful to an AI that also has vast quantities of data from which to learn, and that is exactly what cloud services collect and deliver, whether it’s information about the weather, mail-order deliveries, requests for rides, or people’s tweets.

Access to all that raw data, rather than the minute subset, processed and labelled by human trainers, that was available to previous generations of AIs, is one of the biggest factors transforming AI research today, according to a Stanford University study of the next 100 years in AI.

And while having computers watch everything we do, online and off, in order to learn how to work with us might seem creepy, it’s really only in our minds. The computers don’t feel anything. Yet.

[Source:- JW]

Oracle survey: Java EE users want REST, HTTP/2

In September and October, Oracle asked Java users to rank future Java EE enhancements by importance. The survey’s 1,700 participants put REST services and HTTP/2 as top priorities, followed by OAuth and OpenID, eventing, and JSON-B (Java API for JSON Binding).

“REST (JAX-RS 2.1) and HTTP/2 (Servlet 4.0) have been voted as the two most important technologies surveyed, and together with JSON-B represent three of the top six technologies,” a report on the survey concludes. “Much of the new API work in these technologies for Java EE 8 is already complete. There is significant value in delivering Java EE 8 with these technologies, and the related JSON-P (Java API for JSON Processing) updates, as soon as possible.”

Oracle is pursuing Java EE 8 as a retooled version of the platform geared to cloud and microservices deployments. It’s due in late 2017, and a follow-up release, Java EE 9, is set to appear a year later.

Based on the survey, Oracle considered accelerating Java EE standards for OAuth and OpenID Connect. “This could not be accomplished in the Java EE 8 timeframe, but we’ll continue to pursue Security 1.0 for Java EE 8,” the company said. But two other technologies that ranked high in the survey, configuration and health-checking, will be postponed. “We have concluded it is best to defer inclusion of these technologies in Java EE in order to complete Java EE 8 as soon as possible.”

Management, JMS (Java Message Service), and MVC ranked low, supporting Oracle’s plans to withdraw new APIs for these areas from Java EE 8. While CDI (Contexts and Dependency Injection) 2.0, Bean Validation 2.0, and JSF (JavaServer Faces) 2.3 were not directly surveyed, Oracle has made significant progress on them and will include them in Java EE 8.

JAX-RS (Java API for RESTful Web Services) drew a lot of support for use with cloud and microservices applications, with 1,171 respondents rating it as very important. “The current practice of cloud development in Java is largely based on REST and asynchrony,” the report said. “For Java developers, that means using the standard JAX-RS API. Suggested enhancements coming to the next version of JAX-RS include: a reactive client API, non-blocking I/O support, server-sent events and better CDI integration.” HTTP/2, a protocol for more efficient use of network resources and reduced latency, was rated very important by 1,037 respondents when it comes to cloud and microservices applications.
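To ground the terminology, this is roughly what a resource looks like with the existing JAX-RS 2.0 API; the class, path, and payload here are invented for illustration and are not from the survey.

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;

// A minimal JAX-RS resource: one annotated class, one HTTP method mapping.
@Path("/orders")
public class OrderResource {

    @GET
    @Produces(MediaType.APPLICATION_JSON)
    public String listOrders() {
        // A real service would return objects serialized via JSON-B/JSON-P.
        return "[]";
    }
}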

Respondents also supported the reactive style of programming for the next generation of cloud and microservices applications, with 647 calling it very important; eventing for cloud and microservices applications was favored by 769 respondents. “Many cloud applications are moving from a synchronous invocation model to an asynchronous event-driven model,” Oracle said. “Key Java EE APIs could support this model for interacting with cloud services. A common eventing system would simplify the implementation of such services.”
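JAX-RS 2.0 already offers a server-side asynchronous API that hints at this direction: a resource can release the request thread and resume the response later. A minimal sketch follows; the resource name and the raw background thread are illustrative (a container-managed executor would be used in practice).

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.container.AsyncResponse;
import javax.ws.rs.container.Suspended;

@Path("/reports")
public class ReportResource {

    @GET
    public void generate(@Suspended final AsyncResponse response) {
        // Free the request thread; resume the response when the slow work completes.
        new Thread(() -> response.resume(runLongQuery())).start();
    }

    private String runLongQuery() {
        return "report-data";  // placeholder for real work
    }
}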

In other findings, eventual consistency for cloud and microservices applications was favored by 514 respondents who found it very important and 468 who found it important. Multi-tenancy, critical to cloud deployments, was rated very important by 377 respondents and important by 390 survey takers. JSON-P was rated as very important by 576 respondents, while 781 gave this same rating to JSON-B. Standardizing NoSQL database support for cloud and microservices applications was rated very important by 489 respondents and important by 373 of those surveyed, and 582 respondents thought it was very important that Java EE 9 investigate the modularization of EE containers.

The greatest number of the survey’s respondents — more than 700 — had more than eight years’ experience developing with Java EE, while 680 had from two to eight years of experience.

[Source:- JW]

Apache Beam unifies batch and streaming for big data

Apache Beam, a unified programming model for both batch and streaming data, has graduated from the Apache Incubator to become a top-level Apache project.

Aside from becoming another full-fledged tool in the ever-expanding Apache belt of big-data processing software, Beam is notable for addressing ease of use and developer-friendly abstraction, rather than simply offering raw speed or a wider array of included processing algorithms.

Beam us up!

Beam provides a single programming model for creating batch and stream processing jobs (the name is a hybrid of “batch” and “stream”), and it offers a layer of abstraction for dispatching to the various engines used to run those jobs. The project originated at Google, where it is currently offered as a managed service, GCD (Google Cloud Dataflow). Beam uses the same API as GCD, and it can use GCD as an execution engine, along with Apache Spark, Apache Flink (a stream processing engine with a highly memory-efficient design), and now Apache Apex (another stream engine designed to work closely with Hadoop deployments).

The Beam model involves five components: the pipeline (the pathway for data through the program); the “PCollections,” the datasets flowing through the pipeline; the transforms, for processing data; the sources and sinks, where data is fetched and eventually sent; and the “runners,” the components that allow the whole thing to be executed on a given engine.
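Here is a minimal sketch of how those pieces fit together in Beam’s Java SDK; the file paths are placeholders, and exact API details have shifted between Beam releases, so treat it as illustrative rather than definitive.

import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.io.TextIO;
import org.apache.beam.sdk.options.PipelineOptions;
import org.apache.beam.sdk.options.PipelineOptionsFactory;
import org.apache.beam.sdk.transforms.Count;
import org.apache.beam.sdk.transforms.MapElements;
import org.apache.beam.sdk.transforms.SimpleFunction;
import org.apache.beam.sdk.values.KV;

// Reads lines (source), counts duplicates (transform), writes results (sink);
// the runner configured in the options decides which engine executes it.
public class LineCount {
    public static void main(String[] args) {
        PipelineOptions options = PipelineOptionsFactory.fromArgs(args).create();
        Pipeline p = Pipeline.create(options);  // the pipeline

        p.apply("ReadLines", TextIO.read().from("input.txt"))      // source -> PCollection<String>
         .apply("CountPerLine", Count.<String>perElement())        // transform -> PCollection<KV<String, Long>>
         .apply("Format", MapElements.via(new SimpleFunction<KV<String, Long>, String>() {
             @Override
             public String apply(KV<String, Long> kv) {
                 return kv.getKey() + ": " + kv.getValue();
             }
         }))
         .apply("WriteCounts", TextIO.write().to("line-counts"));  // sink

        p.run();  // dispatched to Dataflow, Spark, Flink, or Apex via the chosen runner
    }
}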

Apache says it separated concerns in this fashion so that Beam can “easily and intuitively express data processing pipelines for everything from simple batch-based data ingestion to complex event-time-based stream processing.” This is in line with reworking tools like Apache Spark to support stream and batch processing within the same product and with similar programming models. In theory, it’s one fewer concept for prospective developers to wrap their head around, but that presumes Beam is used in lieu of Spark or other frameworks, when it’s more likely it’ll be used — at first — to augment them.

Hands off

One possible drawback to Beam’s approach is that while the layers of abstraction in the product make operations easier, they also put the developer at a distance from the underlying layers. A good case in point is Beam’s current level of integration with Apache Spark: the Spark runner doesn’t yet use Spark’s more recent DataFrames system, and thus may not take advantage of the optimizations DataFrames can provide. But this isn’t a conceptual flaw; it’s an implementation issue that can be addressed in time.

The big payoff of using Beam, as noted by Ian Pointer in his discussion of Beam in early 2016, is that it makes migrations between processing systems less of a headache. Likewise, Apache says Beam “cleanly [separates] the user’s processing logic from details of the underlying engine.”

Separation of concerns and ease of migration will be good to have if the ongoing rivalry between the various big data processing engines continues. Granted, Apache Spark has emerged as one of the undisputed champs of the field and become a de facto standard choice. But there’s always room for improvement or an entirely new streaming or processing paradigm. Beam is less about offering a specific alternative than about providing developers and data wranglers with more breadth of choice between engines.

[Source:- Javaworld]

New framework uses Kubernetes to deliver serverless app architecture

A new framework built atop Kubernetes is the latest project to offer serverless or AWS Lambda-style application architecture on your own hardware or in a Kubernetes-as-a-service offering.

The Fission framework keeps the details about Docker and Kubernetes away from developers, allowing them to concentrate on the software rather than the infrastructure. It’s another example of Kubernetes becoming a foundational technology.

Some assembly, but little container knowledge, required

Written in Go and created by managed-infrastructure provider Platform9, Fission works in conjunction with any Kubernetes cluster. Developers write functions that use Fission’s API, much the same as they would for AWS Lambda. Each function runs in what’s called an environment, essentially a package for the language runtime. Triggers are used to map functions to events; HTTP routes are one common trigger.
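A sketch of that workflow, based on the project’s early command-line examples (the names are illustrative, and exact flags may differ between Fission versions):

fission env create --name nodejs --image fission/node-env         # an environment: the language runtime
fission function create --name hello --env nodejs --code hello.js # register a function
fission route create --method GET --url /hello --function hello   # an HTTP-route trigger
curl http://$FISSION_ROUTER/hello                                 # invoke the function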

Fission lets users leverage Kubernetes and Docker to run applications without needing to know the intimate details of either simply to ensure the application runs well. Likewise, developers don’t have to build app containers, but they can always use a prebuilt container if needed, especially if the app is larger and more complex than a single function can encapsulate.

Fission’s design allows applications to be highly responsive to triggers. When launched, Fission creates a pool of “prewarmed” containers ready to receive functions. According to Fission’s developers, this means an average of 100 milliseconds for the “cold start” of an application, although that figure will likely be dependent on the deployment and the hardware.

We’re just getting warmed up!

A few good clues indicate what Fission’s developers want to do with the project in the future. For one, the plan includes being as language- and runtime-agnostic as possible. Right now the only environments (read: runtimes) that ship with Fission are for Node.js and Python, but new ones can be added as needed, and existing ones can be modified freely. “An environment is essentially just a container with a web server and dynamic loader,” explains Fission’s documentation.

Another currently underdeveloped area that will be expanded in future releases is the variety of triggers available to Fission. Right now, HTTP routes are the only trigger type that can be used, but plans are on the table to add other triggers, such as Kubernetes events.

[Source:- Javaworld]

New JVM language stands apart from Scala, Clojure

Another JVM language, Haskell dialect Eta, has arrived on the scene, again centering on functional programming.

Intended for building scalable systems, Eta is a strongly typed functional language. It’s similar to Scala, a JVM language that also emphasizes functional programming and scalability, and to Clojure, another functional language on the JVM.

But Eta sets itself apart from such competitors because it’s immutable by default, it uses lazy evaluation, and it has a very powerful type system, said Eta founder Rahul Muttineni, CTO at TypeLead, which oversees the language. This combination allows static guarantees and conciseness that are simply not possible in Scala or Clojure, he said.

Currently at version 0.0.5 in an alpha release, Eta is interoperable with Java, allowing reuse of Java libraries in Eta projects and use of Eta modules in Java. Strong type safety enables developers to tell the compiler more information about code, while immutability in Eta boosts concurrency.

Eta also features purity, in which calling a function with the same arguments yields the same results each time; function definitions are treated as equations, and substitutions can be performed as in math. Eta proponents said this makes it easier to understand code and prevents a lot of bugs typical of imperative languages. “Purity allows you to treat your code like equations in mathematics and makes it a lot easier to reason about your code, especially in concurrency and parallelism settings,” Muttineni said.

Eta is “lazy by default,” meaning data stays in an unevaluated state until a function needs to see inside. This lets developers program without having to be concerned about whether they have done more computation than was required. Developers also can write multipass algorithms in a single pass. “Laziness allows you to stop worrying about the order in which you write your statements,” said Muttineni. “Just specify the data dependencies by defining expressions and their relationships to each other, and the compiler will execute them in the right order and only if the expressions are needed.”
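Java can only approximate that style with explicit constructs, but a small sketch using java.util.stream conveys the point: the conceptually infinite source below is evaluated only as far as the consumer demands.

import java.util.stream.Stream;

// Illustrative only: Eta is lazy everywhere by default, whereas Java
// confines this behavior to specific APIs such as streams.
public class LazinessSketch {
    public static void main(String[] args) {
        Stream<Long> naturals = Stream.iterate(0L, n -> n + 1); // conceptually infinite
        naturals.limit(5).forEach(System.out::println);         // only five elements are ever computed
    }
}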

Plans call for fitting Eta with a concurrent runtime, an interactive REPL, metaprogramming, massive parallelism, and transactional concurrency. Support for the Maven build manager and a core library are in development as well, along with boilerplate generation for Java Foreign Function Interface imports.

[Source:- Javaworld]


Developers pick up new Git code-hosting option

Developers are gaining another option for Git code-hosting with Gitea, a lightweight, self-hosted platform.

Offered as open source under an MIT license, Gitea aims to be the easiest, fastest, and most painless way of setting up a self-hosted Git service, the project’s GitHub repo states. A community-managed fork of Gogs, another self-hosted Git service, Gitea is written in Go and can be compiled for Windows, Linux, and macOS. It will run on Intel, AMD, PowerPC, and ARM processors.

Gitea offers a solution for private repos, Rémy Boulanouar, a maintainer of Gitea, said. “For my own usage, I have dozens of project stored in Git in my personal laptop. I don’t want to share them with everybody and don’t want to pay to have private repositories of GitHub,” he said. “I used BitBucket a while ago to have [a] free private repository, but since I have a personal server at home, I wanted to store everything on it. Gitea is the perfect match for me: free, fast, and small.”

Proponents bill Gitea as easy to install: users can run the binary directly or package and deploy it with Docker or Vagrant. Gitea reached its 1.0.0 release in late December. “I wanted to have a GitHub-like [platform] in my own server but didn’t wanted to install the huge GitLab,” Boulanouar said. “I found Gogs during my search and wanted to make it really close to GitHub. I saw some missing feature and learned Go just for that.”

[Source:- Javaworld]

Department of Labor sues Google over wage data

The U.S. Department of Labor has filed a lawsuit against Google, with the company’s ability to win government contracts at risk.

The agency is seeking what it calls “routine” information about wages and the company’s equal opportunity program. It filed the lawsuit with its Office of Administrative Law Judges to gain access to that information, it announced Wednesday.

Google, as a federal contractor, is required to provide the data as part of a compliance check by the agency’s Office of Federal Contract Compliance Programs (OFCCP), according to the Department of Labor. The inquiry is focused on Google’s compliance with equal employment laws, the agency said.

“Like other federal contractors, Google has a legal obligation to provide relevant information requested in the course of a routine compliance evaluation,” OFCCP Acting Director Thomas Dowd said in a press release. “Despite many opportunities to produce this information voluntarily, Google has refused to do so.”

Google said it’s provided hundreds of thousands of records to the agency over the past year, including some related to wages. However, a handful of OFCCP data requests were “overbroad” or would reveal confidential data, the company said in a statement.

“We’ve made this clear to the OFCCP, to no avail,” the statement added. “These requests include thousands of employees’ private contact information which we safeguard rigorously.”

Google must allow the federal government to inspect and copy records relevant to compliance, the Department of Labor said. The agency requested the information in September 2015, but Google provided only partial responses, an agency spokesman said by email.

[Source:- Javaworld]

IBM: Next 5 years AI, IoT and nanotech will literally change the way we see the world

Perhaps the coolest thing about IBM’s 9th “Five Innovations that will Help Change our Lives within Five Years” predictions is that none of them sound like science fiction.

“With advances in artificial intelligence and nanotechnology, we aim to invent a new generation of scientific instruments that will make the complex invisible systems in our world today visible over the next five years,” said Dario Gil, vice president of science & solutions at IBM Research in a statement.

Key areas among the five predictions IBM sees as being important in the next five years include artificial intelligence, hyperimaging, and small sensors. Specifically, according to IBM:

1. In five years, what we say and write will be used as indicators of our mental health and physical wellbeing. Patterns in our speech and writing analyzed by new cognitive systems will provide tell-tale signs of early-stage mental and neurological diseases that can help doctors and patients better predict, monitor and track these diseases. At IBM, scientists are using transcripts and audio inputs from psychiatric interviews, coupled with machine learning techniques, to find patterns in speech to help clinicians accurately predict and monitor psychosis, schizophrenia, mania and depression.

Today, it only takes about 300 words to help clinicians predict the probability of psychosis in a user. Cognitive computers can analyze a patient’s speech or written words to look for tell-tale indicators found in language, including meaning, syntax and intonation. Combining the results of these measurements with those from wearable devices and imaging systems (MRIs and EEGs) can paint a more complete picture of the individual for health professionals to better identify, understand and treat the underlying disease.

2. In five years, new imaging devices using hyperimaging technology and AI will help us see broadly beyond the domain of visible light by combining multiple bands of the electromagnetic spectrum to reveal valuable insights or potential dangers that would otherwise be unknown or hidden from view. Most importantly, these devices will be portable, affordable and accessible, so superhero vision can be part of our everyday experiences.

A view of the invisible or vaguely visible physical phenomena all around us could help make road and traffic conditions clearer for drivers and self-driving cars. For example, using millimeter wave imaging, a camera and other sensors, hyperimaging technology could help a car see through fog or rain, detect hazardous and hard-to-see road conditions such as black ice, or tell us if there is some object up ahead and its distance and size. Embedded in our phones, these same technologies could take images of our food to show its nutritional value or whether it’s safe to eat. A hyperimage of a pharmaceutical drug or a bank check could tell us what’s fraudulent and what’s not.

3. In the next five years, new medical labs on a chip will serve as nanotechnology health detectives, tracing invisible clues in our bodily fluids and letting us know immediately if we have reason to see a doctor. The goal is to shrink down to a single silicon chip all of the processes necessary to analyze a disease that would normally be carried out in a full-scale biochemistry lab.

The lab-on-a-chip technology could ultimately be packaged in a convenient handheld device to let people quickly and regularly measure the presence of biomarkers found in small amounts of bodily fluids, sending this information streaming into the cloud from the convenience of their home. There it could be combined with data from other IoT-enabled devices, like sleep monitors and smart watches, and analyzed by AI systems for insights. When taken together, this data set will give us an in-depth view of our health and alert us to the first signs of trouble, helping to stop disease before it progresses.

4. In five years, new, affordable sensing technologies deployed near natural gas extraction wells, around storage facilities, and along distribution pipelines will enable the industry to pinpoint invisible leaks in real time. Networks of IoT sensors wirelessly connected to the cloud will provide continuous monitoring of the vast natural gas infrastructure, allowing leaks to be found in a matter of minutes instead of weeks, reducing pollution and waste and the likelihood of catastrophic events. Scientists at IBM are working with natural gas producers such as Southwestern Energy to explore the development of an intelligent methane monitoring system, as part of the ARPA-E Methane Observation Networks with Innovative Technology to Obtain Reductions (MONITOR) program.

5. In five years, we will use machine-learning algorithms and software to organize information about the physical world, bringing the vast and complex data gathered by billions of devices within the range of our vision and understanding. We call this a “macroscope”: unlike the microscope, which sees the very small, or the telescope, which sees far away, it is a system of software and algorithms that brings all of Earth’s complex data together to analyze it for meaning.

By aggregating, organizing and analyzing data on climate, soil conditions, water levels and their relationship to irrigation practices, for example, a new generation of farmers will have insights that help them determine the right crop choices, where to plant them and how to produce optimal yields while conserving precious water supplies. Beyond our own planet, macroscope technologies could handle, for example, the complicated indexing and correlation of various layers and volumes of data collected by telescopes to predict asteroid collisions with one another and learn more about their composition.

IBM has had some success with its “five in five” predictions in the past. For example, in 2012 it predicted that computers would have a sense of smell. IBM says “sniffing” technology is already in use at the Metropolitan Museum of Art, working to preserve and protect priceless works of art by monitoring fluctuations in temperature, relative humidity, and other environmental conditions. “And this same technology is also being used in the agricultural industry to monitor soil conditions, allowing farmers to better schedule irrigation and fertilization schedules, saving water and improving crop yield,” IBM said.

In 2009, it predicted that buildings would sense and respond like living organisms. IBM said it is working with the U.S. General Services Administration (GSA) to develop and install advanced smart building technology in 50 of the federal government’s highest energy-consuming buildings. “Part of GSA’s larger smart building strategy, this initiative connects building management systems to a central cloud-based platform, improving efficiency and saving up to $15 million in taxpayer dollars annually. IBM is also helping the second largest school district in the U.S. become one of the greenest and most sustainable by making energy conservation and cost savings as easy as sending a text message,” IBM stated.

[Source:- Javaworld]

Lift language opens the door to cross-platform parallelism

Wouldn’t it be great to write code that runs high-speed parallel algorithms on just about every kind of hardware out there, and without needing to be hand-tweaked to run well on GPUs versus CPUs?

That’s the promise behind a new project being developed by professors and students from the University of Edinburgh and the University of Münster, with support from Google. Together they’re proposing a new open source functional language, called “Lift,” for writing algorithms that run in parallel across a wide variety of hardware.

Lift generates code for OpenCL, a programming system designed to target CPUs, GPUs, and FPGAs alike, and it automatically produces the optimizations appropriate to each of those hardware types.

OpenCL can be optimized “by hand” to improve performance in different environments — on a GPU versus a regular CPU, for instance. Unfortunately, those optimizations aren’t portable across hardware types, and code has to be optimized for CPUs and GPUs separately. In some cases, OpenCL code optimized for GPUs won’t even work at all on a CPU. Worse, the optimizations in question are tedious to implement by hand.

Lift is meant to work around all this. In language-hacker terms, Lift is what’s called an “intermediate language,” or IL. According to a paper that describes the language’s concepts, it’s intended to allow the programmer to write OpenCL code by way of high-level abstractions that map to OpenCL concepts. It’s also possible for users to manually declare functions written in “a subset of the C language operating on non-array data types.”

When Lift code is compiled to OpenCL, it’s automatically optimized by iterating through many possible versions of the code, then testing their actual performance. This way, the results are not optimized in the abstract, as it were, for some given piece of hardware. Instead, they’re based on the actual performance of that algorithm.
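As a toy illustration of that empirical approach (this is not Lift’s actual machinery, just the general idea of performance-based selection), consider a harness that times semantically equivalent variants of an operation and keeps the fastest:

import java.util.Arrays;
import java.util.List;
import java.util.function.IntUnaryOperator;

// Two variants compute the same result with different code shapes; the
// winner is chosen by measured runtime, not by abstract reasoning.
public class EmpiricalTuning {
    static long timeNanos(IntUnaryOperator variant) {
        long start = System.nanoTime();
        long acc = 0;
        for (int i = 0; i < 5_000_000; i++) acc += variant.applyAsInt(i);
        if (acc == 42) System.out.print("");  // keep the loop live under the JIT
        return System.nanoTime() - start;
    }

    public static void main(String[] args) {
        List<IntUnaryOperator> variants = Arrays.asList(
                i -> i * 2,   // variant A
                i -> i << 1   // variant B: equivalent, different instruction
        );
        int best = 0;
        long bestTime = Long.MAX_VALUE;
        for (int v = 0; v < variants.size(); v++) {
            long t = timeNanos(variants.get(v));
            if (t < bestTime) { bestTime = t; best = v; }
        }
        System.out.println("Fastest variant, chosen empirically: " + best);
    }
}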

One stated advantage of targeting multiple hardware architectures with a given algorithm is that it allows the same distributed application to run on a wider variety of hardware and to take advantage of heterogeneous architectures. If you have a system with a mix of CPU, GPU, and FPGA hardware, or two different kinds of GPU, the same application can in theory take advantage of all of those resources simultaneously. The end result is easier to deploy, since it’s not confined to any one kind of setup.

[Source:- Javaworld]