New framework uses Kubernetes to deliver serverless app architecture

A new framework built atop Kubernetes is the latest project to offer serverless or AWS Lambda-style application architecture on your own hardware or in a Kubernetes-as-a-service offering.

The Fission framework keeps the details about Docker and Kubernetes away from developers, allowing them to concentrate on the software rather than the infrastructure. It’s another example of Kubernetes becoming a foundational technology.

Some assembly, but little container knowledge, required

Written in Go and created by managed-infrastructure provider Platform9, Fission works in conjunction with any Kubernetes cluster. Developers write functions that use Fission’s API, much the same as they would for AWS Lambda. Each function runs in what’s called an environment, essentially a package for the language runtime. Triggers are used to map functions to events; HTTP routes are one common trigger.

Fission lets developers leverage Kubernetes and Docker to run applications without handling either directly. They don’t need to know intimate details about Docker or Kubernetes simply to ensure the application runs well. Likewise, developers don’t have to build app containers, though they can always use a prebuilt container if needed, especially if the app is larger and more complex than a single function can encapsulate.

Fission’s design allows applications to be highly responsive to triggers. When launched, Fission creates a pool of “prewarmed” containers ready to receive functions. According to Fission’s developers, this means an average of 100 milliseconds for the “cold start” of an application, although that figure will likely be dependent on the deployment and the hardware.

We’re just getting warmed up!

A few good clues indicate what Fission’s developers want to do with the project in the future. For one, the plan includes being as language- and runtime-agnostic as possible. Right now the only environments (read: runtimes) that ship with Fission are for Node.js and Python, but new ones can be added as needed, and existing ones can be modified freely. “An environment is essentially just a container with a web server and dynamic loader,” explains Fission’s documentation.

Another currently underdeveloped area slated for expansion in future releases is the variety of triggers available to Fission. Right now, HTTP routes are the only trigger type, but plans are on the table to add others, such as Kubernetes events.


[Source:- Javaworld]

New JVM language stands apart from Scala, Clojure

Another JVM language, Haskell dialect Eta, has arrived on the scene, again centering on functional programming.

Intended for building scalable systems, Eta is a strongly typed functional language. It’s similar to Scala, a JVM language that also emphasizes functional programming and scalability, and to Clojure, another functional language on the JVM.

But Eta sets itself apart from such competitors because it’s immutable by default, it uses lazy evaluation, and it has a very powerful type system, said Eta founder Rahul Muttineni, CTO at TypeLead, which oversees the language. This combination allows static guarantees and conciseness simply not possible in Scala or Clojure.

Currently at version 0.0.5 in an alpha release, Eta is interoperable with Java, allowing reuse of Java libraries in Eta projects and use of Eta modules in Java. Strong type safety enables developers to tell the compiler more information about code, while immutability in Eta boosts concurrency.

Eta also features purity, in which calling a function with the same arguments yields the same results each time; function definitions are treated as equations and substitutions can be performed like in math. Eta proponents said this makes it easier to understand code and prevents a lot of bugs typical in imperative languages. “Purity allows you to treat your code like equations in mathematics and makes it a lot easier to reason about your code, especially in concurrency and parallelism settings,” Muttineni said.

Eta is “lazy by default,” meaning data stays in an unevaluated state until a function needs to see inside. This lets developers program without having to be concerned about whether they have done more computation than was required. Developers also can write multipass algorithms in a single pass. “Laziness allows you to stop worrying about the order in which you write your statements,” said Muttineni. “Just specify the data dependencies by defining expressions and their relationships to each other, and the compiler will execute them in the right order and only if the expressions are needed.”

Plans call for fitting Eta with a concurrent runtime, an interactive REPL, metaprogramming, massive parallelism, and transactional concurrency. Support for the Maven build manager and a core library are in development as well, along with boilerplate generation for Java Foreign Function Interface imports.


[Source:- Javaworld]

Oracle survey: Java EE users want REST, HTTP/2

In September and October, Oracle asked Java users to rank future Java EE enhancements by importance. The survey’s 1,700 participants put REST services and HTTP/2 as top priorities, followed by OAuth and OpenID, eventing, and JSON-B (Java API for JSON Binding).

“REST (JAX-RS 2.1) and HTTP/2 (Servlet 4.0) have been voted as the two most important technologies surveyed, and together with JSON-B represent three of the top six technologies,” a report on the survey concludes. “Much of the new API work in these technologies for Java EE 8 is already complete. There is significant value in delivering Java EE 8 with these technologies, and the related JSON-P (Java API for JSON Processing) updates, as soon as possible.”

Oracle is pursuing Java EE 8 as a retooled version of the platform geared to cloud and microservices deployments. It’s due in late 2017, and a follow-up release, Java EE 9, is set to appear a year later.

Based on the survey, Oracle considered accelerating Java EE standards for OAuth and OpenID Connect. “This could not be accomplished in the Java EE 8 timeframe, but we’ll continue to pursue Security 1.0 for Java EE 8,” the company said. But two other technologies that ranked high in the survey, configuration and health-checking, will be postponed. “We have concluded it is best to defer inclusion of these technologies in Java EE in order to complete Java EE 8 as soon as possible.”

Management, JMS (Java Message Service), and MVC ranked low, thus supporting Oracle’s plans to withdraw new APIs for these areas from Java EE 8. While CDI (Contexts and Dependency Injection) 2.0, Bean Validation 2.0, and JSF (JavaServer Faces) 2.3 were not directly surveyed, Oracle has made significant progress on them and will include them in Java EE 8.

JAX-RS (Java API for RESTful Web Services) drew a lot of support for use with cloud and microservices applications, with 1,171 respondents rating it as very important. “The current practice of cloud development in Java is largely based on REST and asynchrony,” the report said. “For Java developers, that means using the standard JAX-RS API. Suggested enhancements coming to the next version of JAX-RS include: a reactive client API, non-blocking I/O support, server-sent events and better CDI integration.” HTTP/2, a protocol for more efficient use of network resources and reduced latency, was rated very important by 1,037 respondents when it comes to cloud and microservices applications.
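
For readers who haven’t worked with it, here is a minimal sketch of what the standard JAX-RS API looks like; the resource class, path, and payload below are invented for illustration and are not from the survey.

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;

// A hypothetical resource class; JAX-RS maps the HTTP route to the method.
@Path("/orders")
public class OrdersResource {

    // GET /orders returns JSON; the container handles the HTTP plumbing.
    @GET
    @Produces(MediaType.APPLICATION_JSON)
    public String listOrders() {
        return "[{\"id\": 1, \"status\": \"shipped\"}]";
    }
}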

Respondents also supported the reactive style of programming for the next generation of cloud and microservices, with 647 calling it very important, and eventing, for cloud and microservices applications, was favored by 769 respondents. “Many cloud applications are moving from a synchronous invocation model to an asynchronous event-driven model,” Oracle said. “Key Java EE APIs could support this model for interacting with cloud services. A common eventing system would simplify the implementation of such services.”
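
Java EE already has one building block for this style in CDI events. The following rough sketch, with made-up event and bean names, shows the fire-and-observe pattern inside a single application; the common eventing system the survey describes would extend this idea across cloud services.

import javax.enterprise.event.Event;
import javax.enterprise.event.Observes;
import javax.inject.Inject;

// Hypothetical event payload.
class OrderPlaced {
    final String orderId;
    OrderPlaced(String orderId) { this.orderId = orderId; }
}

class OrderService {
    @Inject
    Event<OrderPlaced> orderPlaced;            // CDI supplies the event dispatcher

    void placeOrder(String id) {
        orderPlaced.fire(new OrderPlaced(id)); // delivered to all matching observers
    }
}

class ShippingService {
    // Any managed bean method annotated with @Observes receives the event.
    void onOrderPlaced(@Observes OrderPlaced event) {
        System.out.println("Scheduling shipment for order " + event.orderId);
    }
}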

In other findings, eventual consistency for cloud and microservices applications was favored by 514 respondents who found it very important and 468 who found it important. Multi-tenancy, critical to cloud deployments, was rated very important by 377 respondents and important by 390 survey takers. JSON-P was rated as very important by 576 respondents, while 781 gave this same rating to JSON-B. Standardizing NoSQL database support for cloud and microservices applications was rated very important by 489 respondents and important by 373 of those surveyed, and 582 respondents thought it was very important that Java EE 9 investigate the modularization of EE containers.

The greatest number of the survey’s respondents — more than 700 — had more than eight years’ experience developing with Java EE, while 680 had from two to eight years of experience.


[Source:- Javaworld]


Developers pick up new Git code-hosting option

Developers are gaining another option for Git code-hosting with Gitea, a lightweight, self-hosted platform.

Offered as open source under an MIT license, Gitea aims to be the easiest, fastest, and most painless way of setting up a self-hosted Git service, the project’s GitHub repo states. A community-managed fork of Gogs, for hosting a Git service, Gitea was written in Go and can be compiled for Windows, Linux, and MacOS. It will run on Intel, AMD, PowerPC, and ARM processors.

Gitea offers a solution for private repos, Rémy Boulanouar, a maintainer of Gitea, said. “For my own usage, I have dozens of project stored in Git in my personal laptop. I don’t want to share them with everybody and don’t want to pay to have private repositories of GitHub,” he said. “I used BitBucket a while ago to have [a] free private repository, but since I have a personal server at home, I wanted to store everything on it. Gitea is the perfect match for me: free, fast, and small.”

Proponents bill Gitea as easy to install, with users either able to run the binary or ship Gitea with Docker or Vagrant to package it. Gitea went to a 1.0.0 release in late December. “I wanted to have a GitHub-like [platform] in my own server but didn’t wanted to install the huge GitLab,” Boulanouar said. “I found Gogs during my search and wanted to make it really close to GitHub. I saw some missing feature and learned Go just for that.”


[Source:- Javaworld]

Department of Labor sues Google over wage data


The U.S. Department of Labor has filed a lawsuit against Google, with the company’s ability to win government contracts at risk.

The agency is seeking what it calls “routine” information about wages and the company’s equal opportunity program. The agency filed a lawsuit with its Office of Administrative Law Judges to gain access to the information, it announced Wednesday.

Google, as a federal contractor, is required to provide the data as part of a compliance check by the agency’s Office of Federal Contract Compliance Programs (OFCCP), according to the Department of Labor. The inquiry is focused on Google’s compliance with equal employment laws, the agency said.

“Like other federal contractors, Google has a legal obligation to provide relevant information requested in the course of a routine compliance evaluation,” OFCCP Acting Director Thomas Dowd said in a press release. “Despite many opportunities to produce this information voluntarily, Google has refused to do so.”

Google said it’s provided hundreds of thousands of records to the agency over the past year, including some related to wages. However, a handful of OFCCP data requests were “overbroad” or would reveal confidential data, the company said in a statement.

“We’ve made this clear to the OFCCP, to no avail,” the statement added. “These requests include thousands of employees’ private contact information which we safeguard rigorously.”

Google must allow the federal government to inspect and copy records relevant to compliance, the Department of Labor said. The agency requested the information in September 2015, but Google provided only partial responses, an agency spokesman said by email.


[Source:- Javaworld]

IBM: Next 5 years AI, IoT and nanotech will literally change the way we see the world


Perhaps the coolest thing about IBM’s 9th “Five Innovations that will Help Change our Lives within Five Years” predictions is that none of them sound like science fiction.

“With advances in artificial intelligence and nanotechnology, we aim to invent a new generation of scientific instruments that will make the complex invisible systems in our world today visible over the next five years,” said Dario Gil, vice president of science & solutions at IBM Research in a statement.

Among the five areas IBM sees as key in the next five years are artificial intelligence, hyperimaging, and small sensors. Specifically, according to IBM:

1. In five years, what we say and write will be used as indicators of our mental health and physical wellbeing. Patterns in our speech and writing analyzed by new cognitive systems will provide tell-tale signs of early-stage mental and neurological diseases that can help doctors and patients better predict, monitor and track these diseases. At IBM, scientists are using transcripts and audio inputs from psychiatric interviews, coupled with machine learning techniques, to find patterns in speech to help clinicians accurately predict and monitor psychosis, schizophrenia, mania and depression.

Today, it only takes about 300 words to help clinicians predict the probability of psychosis in a user. Cognitive computers can analyze a patient’s speech or written words to look for tell-tale indicators found in language, including meaning, syntax and intonation. Combining the results of these measurements with those from wearable devices and imaging systems (MRIs and EEGs) can paint a more complete picture of the individual for health professionals to better identify, understand and treat the underlying disease.

2. In five years, new imaging devices using hyperimaging technology and AI will help us see broadly beyond the domain of visible light by combining multiple bands of the electromagnetic spectrum to reveal valuable insights or potential dangers that would otherwise be unknown or hidden from view. Most importantly, these devices will be portable, affordable and accessible, so superhero vision can be part of our everyday experiences.

A view of the invisible or vaguely visible physical phenomena all around us could help make road and traffic conditions clearer for drivers and self-driving cars. For example, using millimeter wave imaging, a camera and other sensors, hyperimaging technology could help a car see through fog or rain, detect hazardous and hard-to-see road conditions such as black ice, or tell us if there is some object up ahead and its distance and size. Embedded in our phones, these same technologies could take images of our food to show its nutritional value or whether it’s safe to eat. A hyperimage of a pharmaceutical drug or a bank check could tell us what’s fraudulent and what’s not.

3. In the next five years, new medical labs on a chip will serve as nanotechnology health detectives, tracing invisible clues in our bodily fluids and letting us know immediately if we have reason to see a doctor. The goal is to shrink down to a single silicon chip all of the processes necessary to analyze a disease that would normally be carried out in a full-scale biochemistry lab.

The lab-on-a-chip technology could ultimately be packaged in a convenient handheld device to let people quickly and regularly measure the presence of biomarkers found in small amounts of bodily fluids, sending this information streaming into the cloud from the convenience of their home. There it could be combined with data from other IoT-enabled devices, like sleep monitors and smart watches, and analyzed by AI systems for insights. When taken together, this data set will give us an in-depth view of our health and alert us to the first signs of trouble, helping to stop disease before it progresses.

4. In five years, new, affordable sensing technologies deployed near natural gas extraction wells, around storage facilities, and along distribution pipelines will enable the industry to pinpoint invisible leaks in real-time. Networks of IoT sensors wirelessly connected to the cloud will provide continuous monitoring of the vast natural gas infrastructure, allowing leaks to be found in a matter of minutes instead of weeks, reducing pollution and waste and the likelihood of catastrophic events. Scientists at IBM are working with natural gas producers such as Southwestern Energy to explore the development of an intelligent methane monitoring system and as part of the ARPA-E Methane Observation Networks with Innovative Technology to Obtain Reductions (MONITOR) program.

5. In five years, we will use machine-learning algorithms and software to help us organize the information about the physical world to help bring the vast and complex data gathered by billions of devices within the range of our vision and understanding. We call this a “macroscope” – but unlike the microscope to see the very small, or the telescope that can see far away, it is a system of software and algorithms to bring all of Earth’s complex data together to analyze it for meaning.

By aggregating, organizing and analyzing data on climate, soil conditions, water levels and their relationship to irrigation practices, for example, a new generation of farmers will have insights that help them determine the right crop choices, where to plant them and how to produce optimal yields while conserving precious water supplies. Beyond our own planet, macroscope technologies could handle, for example, the complicated indexing and correlation of various layers and volumes of data collected by telescopes to predict asteroid collisions with one another and learn more about their composition.


IBM has had some success with its “five in five” predictions in the past. For example, in 2012 it predicted computers will have a sense of smell. IBM says “sniffing” technology is already in use at the Metropolitan Museum of Art, working to preserve and protect priceless works of art by monitoring fluctuations in temperature, relative humidity, and other environmental conditions. “And this same technology is also being used in the agricultural industry to monitor soil conditions, allowing farmers to better schedule irrigation and fertilization schedules, saving water and improving crop yield,” IBM said.


In 2009 it predicted that buildings would sense and respond like living organisms. IBM said it is working with the U.S. General Services Administration (GSA) to develop and install advanced smart building technology in 50 of the federal government’s highest energy-consuming buildings. “Part of GSA’s larger smart building strategy, this initiative connects building management systems to a central cloud-based platform, improving efficiency and saving up to $15 million in taxpayer dollars annually. IBM is also helping the second largest school district in the U.S. become one of the greenest and most sustainable by making energy conservation and cost savings as easy as sending a text message,” IBM stated.


[Source:- Javaworld]


Lift language opens the door to cross-platform parallelism

Wouldn’t it be great to write code that runs high-speed parallel algorithms on just about every kind of hardware out there, and without needing to be hand-tweaked to run well on GPUs versus CPUs?

That’s the promise behind a new project being developed by professors and students from the University of Edinburgh and the University of Münster, with support from Google. Together they’re proposing a new open source functional language, called “Lift,” for writing algorithms that run in parallel across a wide variety of hardware.

Lift generates code for OpenCL, a programming framework that can target CPUs, GPUs, and FPGAs alike, and it automatically produces optimizations suited to each of those hardware types.

OpenCL can be optimized “by hand” to improve performance in different environments — on a GPU versus a regular CPU, for instance. Unfortunately, those optimizations aren’t portable across hardware types, and code has to be optimized for CPUs and GPUs separately. In some cases, OpenCL code optimized for GPUs won’t even work at all on a CPU. Worse, the optimizations in question are tedious to implement by hand.

Lift is meant to work around all this. In language-hacker terms, Lift is what’s called an “intermediate language,” or IL. According to a paper that describes the language’s concepts, it’s intended to allow the programmer to write OpenCL code by way of high-level abstractions that map to OpenCL concepts. It’s also possible for users to manually declare functions written in “a subset of the C language operating on non-array data types.”

When Lift code is compiled to OpenCL, it’s automatically optimized by iterating through many possible versions of the code, then testing their actual performance. This way, the results are not optimized in the abstract, as it were, for some given piece of hardware. Instead, they’re based on the actual performance of that algorithm.

One stated advantage of targeting multiple hardware architectures with a given algorithm is that it allows the same distributed application to run on a wider variety of hardware and to take advantage of heterogeneous architectures. If you have a system that has a mix of CPU, GPU, and FPGA hardware or two different kinds of GPU, the same application can in theory take advantage of all of those resources simultaneously. The end result is easier to deploy, since it’s not confined to any one kind of setup.


[Source:- Javaworld]

Glue, stitch, cobble: Weighing DIY container management


You’ve been tasked with helping your company stay competitive by modernizing your IT organization’s delivery of developed applications. Your company has already embraced virtualization and perhaps dabbled in the public cloud. Containers look like the next big thing for you, so you’re considering how to bring container technology to your organization. Something needs to create containers on compute resources and network them together, so you start sketching the general components on the drawing board.

You start doing the research. You soon discover that cloud management platforms, PaaS, and container management platforms are all readily available as prepackaged software and services. Even the individual components that make up those packages are available in Open Source Land. “Hmm,” you think, “Why pay anyone for a platform when the parts are there to do this myself?”

For a brief moment, you’re taken all the way back to kindergarten. The teacher starts crafting class and opens the drawer to an array of fun-looking parts. Pastel paper, glitter, and bows! You’re ready to craft that masterpiece. All you need is a bottle of glue!

After a blink, you’re back to the IT drawing board, laying out the parts for your future container management platform in greater detail:

  • Build tools
  • Servers/OSes
  • Container runtime
  • Container-to-container networking
  • Ingress networking
  • Firewall
  • Load balancer
  • Database
  • Storage
  • DNS

All you need is the “glue” to bind these parts together.

Naturally, connecting those different parts requires varying degrees of development effort. We’ve simplified this spectrum into four general “glue levels” of effort.

Glue level 1: Direct component-to-component bridging

In this case, a component has the capability to interface directly with the next logical component in the application deployment workflow.

Let’s assume you have a Jenkins platform and an instance of Docker Engine. Get Jenkins to build code, then create a Docker image. Better yet, have Jenkins call Docker Engine itself and point Docker to your newly created image.

Glue level 2: Basic scripting to bridge components

In this case, a component does not have the capability to interface with the next logical component in the application deployment workflow.

For example, in a Docker Swarm, if a deployed service publishes port 80, then every node in the cluster claims port 80, whether or not that particular node is running an instance of the container.

Let’s say you have another application that needs to listen on port 80. Because the whole Docker Swarm has already locked down port 80, you’ll have to use an external load balancer that’s tied in with DNS to listen to, for example, appA.mycluster.com and appB.mycluster.com (both listening at port 80 at the ingress side of the load balancer).


After the containers have been deployed by an external script, you’ll have to interface with the load balancer to configure it to listen to the app and forward to the appropriate node.

Glue level 3: Scripting to manage components

In this case, your workflow finishes from one component and transitions to multiple separate components. At this point, you’re creating a middle-tier component that needs to maintain state and possibly coordinate workflows. You may have access to component automation (like HashiCorp’s Terraform or Red Hat CloudForms), but you still need a controlling entity that understands the application workflow and state.

Let’s say you have multiple Cloud Foundry instances with an application consisting of a web front-end container, a logic processing container, and an email generation container. You happen to want those containers on the separate Cloud Foundry instances. Even if you don’t need to create a cloud-spanning application, you may want to run applications in different clouds or move applications between clouds. This will require coordination outside of those platforms’ instances.

Assuming you’ve already laid the networking groundwork to connect those Cloud Foundry instances, your custom platform will have to interface with each instance, ship and run the containers, and network those containers appropriately.

Glue level 4: Your own enterprise automation at a level above the deployment workflow

In this case, you have enough glue for a basic start-to-finish workflow from source to deployment, but now you are considering enterprise-level features, including:

  • Automated provisioning and updating of the cluster control software, across multiple or hybrid clouds
  • Advanced container scheduling (for example, affinity between application containers)
  • Establishing role or attribute-based access controls that map organizational structures to rights on the platform
  • Resource quotas
  • Automatic firewall/IPtables management
  • Governance via a policy framework

Here is a simple example of one of the possibilities from a non-DIY alternative, the Apcera Platform. Let’s say your company has these business rules:

  1. Applications from development must run in AWS
  2. Applications in production must run in OpenStack (on-premises)

In the Apcera Platform, these business rules are translated by the admin as:

on job::/dev {
  schedulingTag.hard aws
}

on job::/production {
  schedulingTag.hard openstack
}

When a user (or automation) is identified as part of the /dev or /production namespace in the Apcera Platform, any applications deployed by that user (or automation) will be automatically deployed on the runtime components labeled with aws or openstack, appropriately. Users can either specify a tag when deploying applications (which will be checked by the policy system against a list of allowable tags) or not specify a tag and let the platform choose a runtime component automatically. Because Apcera labels can be arbitrarily defined, admins can create deployment policy for things like requiring “ssd” performance or “special-CPUs.”

Once you have built a platform that spans both AWS and OpenStack (as a single “cluster” or multiple clusters glued together), it may be an easy matter to allow operator choice of locations. With Docker Swarm, it’s quite easy with service constraints:

$ docker service create \
  --name redis_2 \
  --constraint 'node.id == 2ivku8v2gvtg4' \
  redis:3.0.6

In this example, an operator chooses to deploy Redis via Docker Swarm to the specified Docker Engine node. While this is great for operator choice, this choice is not enforced. How do you enforce the company policy of deploying only to the on-premises OpenStack instance if this is a production job (per the company policy, above)?

How long are you willing to wait for the community (in this specific case, brokered through Docker Inc.) to implement this type of enforcement?

Let’s assume you’re left with coding this simple placement policy enforcement yourself. Let’s consider the planning for this effort:

  • You’d have to lock out all general access to Docker except for your enforcement automation.
  • Your enforcement automation has to be some kind of server that can proxy requests from clients.
  • You’d need to identify clients as individual users or members of some group. Do you want to run your own user management or create an interface to your enterprise LDAP server?
  • You’d need to associate the user/group with “production.”
  • You’d need to create a rule framework that permits an entry that translates to “jobs from production can only deploy to OpenStack Docker nodes.”
  • You’d need to create a way to track the node.ids of the Docker Swarm nodes that run on OpenStack.
  • You’d need to keep track of the resources available on each node to see if they can handle the resource requirements of your Docker image.
  • You’d need to understand the resource requirements of your Docker image.

What if, instead of a hard requirement that applications run on specific nodes, you’re sometimes OK with a soft requirement? That is, make a best effort to deploy on the specified nodes, but failing that, deploy on other nodes. Do you really want to write your own scheduler code to fill in the gaps between what Docker offers? Apcera does all of this container management scheduling (via hard and soft tags, and more) already.

All of this is glue code you’d have to create yourself, simply to solve the problem of enforcing where your applications can run. What about enforcing build dependencies as policy? Or a resource quota policy? Or virtual networking policy? Do you really want to write your own policy engine? Apcera was created not only to automate these tasks, but to provide a unified policy framework to govern all of them.


[Source:- Javaworld]

China didn’t steal your job—I did

The most discussed issue in the last election was the plight of the so-called white working class. The story goes that hardworking people had their jobs shipped to Mexico thanks to NAFTA. The second idea is that immigrants have stolen working-class jobs. The kicker is to blame the nation of China.

These ideas attempt to explain why the Rust Belt is idle, but they’re all wrong. Neither the Mexicans nor the Chinese stole those jobs. I did.

I didn’t do it alone, of course. You and the other members of the technology industry that came before us did the bulk of the work. And guess what? If factories come back to the United States as a result of new policy, they will be run by robots.

The boom in the use of less expensive labor overseas was fueled by cheap shipping costs and a simple labor-versus-capital decision. The cost of investing in new equipment in the United States is higher than employing people overseas to produce an item. In some industries, investing in capital is simply riskier. Think about fashion or the latest toy or trinket: If you set up a manufacturing line to make it and it’s only popular for a season or a year, then you’ve risked a lot for a relatively small margin.

On the flip side, this is also why you’ve seen pharmaceutical plants remain in the United States. Thanks to long-term patent protections, it isn’t as risky to automate a factory here. In fact, between liability and the protection of trade secrets, it’s probably less risky than using cheap labor overseas and shipping product. But make no mistake, these aren’t blue-collar jobs going to high school graduates. These factories are highly automated—and monitored by white-collar workers.

If tariffs were increased on goods produced using cheap labor overseas, then of course some factories would move here. Even in those cases, very little of the work would then be done by high school graduates. Gone are the jobs done by Eminem in “8 Mile,” where someone yells “up!” and “down!” while another person stamps sheet metal with a heavy press. Robots can do that easily.

The equation isn’t much different for undocumented immigrants. Farms have invested heavily in capital equipment over the years—with the “last mile” handled by low-paid guest or undocumented workers. If those workers are ejected from the United States, you can bet agribusiness will invest in automation to replace the manual labor.

Capital tends to win in the end. Why? Technology—that is, “we”—tend to make investing in tech cheaper or more productive than labor eventually. Whether we’re designing robots to replace factory workers or developing machine learning to make administrative assistants redundant, we help justify technology purchases rather than hiring messy, expensive, unpredictable humans.

I strongly believe this is better for us all in the end, but the economic and social costs in the short-to-medium term are high. Relentless automation skews the distribution of wealth, undercutting relative economic power of the middle and working class versus the richest among us—and it ultimately hurts overall economic growth because you reduce the number of people capable of buying whatever goods the economy produces.

The solution to this problem is not as simple as “drill baby drill” or exiting NAFTA or slapping 35 percent tariffs on China. We need to take a holistic look at economic policy, education reform, and welfare spending—if not out of the goodness of our hearts, then for the long-term economic well-being of us all.


[Source:- Javaworld]

Android Studio for beginners, Part 4: Advanced tools and plugins


Android Studio offers a rich palette of development tools, and it’s compatible with many plugins. The first three articles in this series focused on basic tools for building simple mobile apps. Now you’ll get acquainted with some of the more advanced tools that are part of Android Studio, along with three plugins you can use to extend Android Studio.

We’ll start with Android Device Monitor, Lint, and Android Monitor, three tools you can use to debug, inspect, and profile application code in Android Studio. Then I’ll introduce you to three plugins: ADB Idea, Codota Code Search, and Project Lombok.

Debugging with Android Device Monitor

Android Device Monitor is an Android SDK tool for debugging failing apps. It provides a graphical user interface for the following SDK tools:

  • Dalvik Debug Monitor Server (DDMS): A debugging tool that provides port-forwarding services, screen capture on the device, thread and heap information on the device, logcat, process, radio state information, incoming call and SMS spoofing, location data spoofing, and more.
  • Tracer for OpenGL ES: A tool for analyzing OpenGL for embedded systems (ES) code in your Android apps. It lets you capture OpenGL ES commands and frame-by-frame images to help you understand how your graphics commands are being executed.
  • Hierarchy Viewer: A graphical viewer for layout view hierarchies (the layout view) and for magnified inspection of the display (the pixel perfect view). This tool can help you debug and optimize your user interface.
  • Systrace: A tool for collecting and inspecting traces (timing information across an entire Android device). A trace shows where time and CPU cycles are being spent, displaying what each thread and process is doing at any given time. It also inspects the captured tracing information to highlight problems that it observes (from list item recycling to rendering content) and provide recommendations about how to fix them.
  • Traceview: A graphical viewer for execution logs that your app creates via the android.os.Debug class to log tracing information in your code. This tool can help you debug your application and profile its performance.
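
As a rough illustration of the execution logs Traceview reads, an app can bracket a code path with android.os.Debug calls; the trace name and the method being profiled here are placeholders.

import android.os.Debug;

class TraceExample {
    // Wrap the code path you want to profile; Traceview opens the resulting .trace file.
    static void profiledWork() {
        Debug.startMethodTracing("example");   // writes example.trace to device storage
        doWork();                              // hypothetical method under investigation
        Debug.stopMethodTracing();             // closes the trace file
    }

    static void doWork() { /* the code being profiled */ }
}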

To launch Android Device Monitor from your command line, execute the monitor program in your Android SDK’s tools directory. If you prefer to run the tool from Android Studio, choose Tools > Android > Android Device Monitor.

You might remember from Part 1 that I used Android Studio to launch my W2A example app in the Nexus 4 emulator. I then launched Android Device Monitor from Android Studio. Figure 1 shows the resulting screen.

Figure 1. The Devices tab appears when DDMS is selected.

The Devices tab shows all accessible devices, which happens to be the emulated Nexus 4 device in this example. Underneath the highlighted device line is a list of currently visible android.app.Activity subclass objects.

I highlighted the W2A activity object identified by its ca.javajeff.w2a package name, then clicked Hierarchy View to activate the Hierarchy Viewer tool. Figure 2 shows the result.

Figure 2. The layout hierarchy of the activity screen is shown in the Tree View pane.

Hierarchy Viewer displays a multipane user interface. The Tree View pane presents a diagram of the activity’s hierarchy of android.view.View subclass objects. The Tree Overview pane offers a smaller map representation of the entire Tree View pane. The Layout View pane (whose contents are not shown in Figure 2) reveals a block representation of the UI. See “Optimizing Your UI” to learn more about the Hierarchy Viewer tool and these panes.

If you attempt to run Hierarchy Viewer with a real (non-emulated) Android device, you could experience the error messages that appear in Figure 3.

Figure 3. Hierarchy Viewer often has trouble with real Android devices.

These messages refer to the view server, which is software running on the device that returns View objects diagrammed by Hierarchy Viewer. Production-build devices return these error messages to strengthen security. You can overcome this problem by using the ViewServer class that was created by Google software engineer Romain Guy.
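
Guy’s ViewServer is typically wired into an activity’s lifecycle roughly as follows; this is a sketch that assumes you have copied his ViewServer class into your project, and it reuses the example app’s main.xml layout.

import android.app.Activity;
import android.os.Bundle;

public class MainActivity extends Activity {
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.main);
        ViewServer.get(this).addWindow(this);         // register this window with the view server
    }

    @Override
    protected void onResume() {
        super.onResume();
        ViewServer.get(this).setFocusedWindow(this);  // tell Hierarchy Viewer which window has focus
    }

    @Override
    protected void onDestroy() {
        ViewServer.get(this).removeWindow(this);      // unregister before the activity goes away
        super.onDestroy();
    }
}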

Inspecting code with Lint

Lint is an Android SDK code-inspection tool for ensuring that code has no structural problems. You can use it to locate issues such as deprecated elements, or API calls that aren’t supported by your target API.

Although Lint can be run from the command line, I find it more helpful to run this tool from within Android Studio. Select Analyze > Inspect Code to activate the Specify Inspection Scope dialog box shown in Figure 4. Then select your desired scope (whole project, in this case), and click the OK button to perform the analysis. The results will appear in the Inspection Results window, where they are organized by category.

Figure 4. I’ve decided to inspect the entire project.

As you can see in Figure 5, Lint has spotted a few issues:

Figure 5. Lint reports that the androidAnimation field could have been declared private.

Lint also complained about the following:

  • A missing contentDescription attribute on the ImageView element in main.xml hampers the app’s accessibility.
  • The root LinearLayout element in main.xml paints the background white (#ffffff) with a theme that also paints a background (inferred theme is @style/AppTheme). Overdrawing like this can hurt performance.
  • The dimens.xml file specifies three dimensional resources that are not used. Specifying unused resources is inefficient.
  • On SDK v23 and up, app data will be automatically backed up and restored on app install. Consider adding the android:fullBackupContent attribute to the application element in AndroidManifest.xml, pointing to an @xml resource that configures which files to back up; otherwise sensitive data might be included in backups.
  • Support for Google app indexing is missing.
  • I stored android0.png, android1.png, and android2.png in drawable, which is intended for density-independent graphics. For a production version of the app, I should have moved them to drawable-mdpi and considered providing higher and lower resolution versions in drawable-ldpi, drawable-hdpi, and drawable-xhdpi. No harm is done in this example, however.
  • Lint checked my spelling, noting the reference to javajeff in the manifest element’s package attribute in AndroidManifest.xml.

See “Improve Your Code with Lint” to learn more about using Lint in Android Studio.

Profiling with Android Monitor

Profiling running apps to find performance bottlenecks is an important part of app development. Android Device Monitor’s Traceview tool offers some profiling support. Android Monitor offers even more.

Android Monitor is an Android Studio component that helps you profile app performance to optimize, debug, and improve your apps. It lets you monitor the following aspects of apps running on hardware and emulated devices:

  • Log messages (system-defined or user-defined)
  • Memory, CPU, and GPU usage
  • Network traffic (hardware device only)

Android Monitor provides real-time information about your app via various tools. It can capture data as your app runs and store it in a file that you can analyze in various viewers. You can also capture screenshots and videos as your app runs.
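
The user-defined log messages it displays come from ordinary android.util.Log calls in your own code, along these lines (the tag and message are placeholders):

import android.util.Log;

class LogExample {
    private static final String TAG = "W2A";   // example tag; filter logcat on it

    static void onAnimateClicked() {
        Log.d(TAG, "Animate button clicked");  // appears in the logcat pane
    }
}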

You can access Android Monitor via Android Studio’s Android Monitor tool window. Select View > Tool Windows > Android Monitor or just press Alt+6:

Figure 6. The logcat pane shows log messages for my Amazon Kindle device.

Figure 6 reveals the Android Monitor tool window, which presents drop-down list boxes that identify the device being monitored (in this case, on my Amazon Kindle Fire device) and the app being debugged on the device. Because ADB integration hasn’t been enabled, “No Debuggable Applications” appears in the latter list. Check Tools > Android > Enable ADB Integration to enable ADB integration.

After enabling ADB integration, I observed that “No Debuggable Applications” was replaced in the drop-down list with “ca.javajeff.w2a,” the package name for the W2A application that was running on my Kindle.

Below the two list boxes are a pair of tabs: logcat and Monitors. The former tab shows logged messages from the device and the latter tab reveals graphics-based memory, CPU, network, and GPU monitors (see Figure 7).

Figure 7. The GPU monitor is disabled for Android 4.0.3, which is the Android version that runs on my Kindle.

The memory monitor shown in Figure 7 reveals that the app occupies almost 13 megabytes and its subsequent memory usage is constant, which isn’t surprising because the app doesn’t make any explicit memory allocations, and the underlying APIs probably don’t require much additional memory. The CPU monitor shows only a slight amount of CPU use via a narrow red line about 1 minute into the monitoring. This usage arose from clicking the Animate button several times. No networking activity is displayed because the app isn’t making network requests. Finally, the GPU monitor is disabled because I’m running an older version of Android (4.0.3), which doesn’t support GPU monitoring.

The left side of the Android Monitor tool window contains a small tool bar with buttons for obtaining a screenshot (the camera icon), recording the screen, obtaining system information (activity manager state, package information, memory usage, memory use over time, and graphics state), terminating the application, and obtaining help. I clicked the camera button and obtained the screenshot shown in Figure 8.

Figure 8. Click the camera button on the left side of the Android Monitor tool window to obtain a screenshot.

See “Android Monitor Overview” to learn more about Android Monitor.

Extending Android Studio apps with plugins

Android Studio’s plugins manager makes it very easy to find and install plugins. Activate the plugin manager by selecting File > Settings, followed by Plugins from the Settings dialog box:

Figure 9. The Settings dialog box shows all installed plugins.

Next, click Browse repositories . . . to activate the Browse Repositories dialog box, which presents a full list of supported plugins:

Figure 10. The pane on the right presents detailed information about the selected plugin.

I’ll introduce three useful plugins (ADB Idea, Codota Code Search, and Project Lombok) and show you how to install and use them.

ADB Idea

ADB Idea speeds up your day-to-day Android development by providing fast access to commonly used ADB commands, such as starting and uninstalling an app:

Figure 11. Click Install to install ADB Idea.

Select ADB Idea in the repository list of plugins and then click the Install button. Android Studio proceeds to download and install the plugin. It then relabels Install to Restart Android Studio. Restarting activates ADB Idea.

Android Studio lets you access ADB Idea from its Tools menu. Select Tools > Android > ADB Idea and choose the appropriate command from the resulting pop-up menu:

Figure 12. Select the appropriate ADB command from the pop-up menu.

The app must be installed before you can use these commands. For example, I selected ADB Restart App and observed the following messages as well as the restarted app on my Amazon Kindle device.

Figure 13. Each message identifies the app, operation, and device.

Codota Code Search

Use the Codota Code Search plugin to access the Codota search engine, which lets you look through millions of publicly available Java source code snippets (on GitHub and other sites) for solutions to coding problems:

Figure 14. Click Install to install Codota Code Search.

To install this plugin, select Codota in the repository list of plugins and then click the Install button. After Android Studio has downloaded and installed the plugin, it will relabel the Install button to Restart Android Studio. Restarting activates Codota Code Search.

Android Studio lets you access Codota Code Search by right-clicking on Java code in the editor window and selecting the Search Open Source (Codota) menu item (or by pressing Ctrl+K), as shown in Figure 15.

Figure 15. Click Search Open Source (Codota) to access the Search Codota dialog box.

Android Studio responds by displaying the Search Codota dialog box whose text field is blank or populated with the full package name of the Java API type that was right-clicked. Figure 16 shows this dialog box.

Figure 16. Press Enter to initiate the search for Java code snippets related to ImageView.

Codota Code Search passes the search text to the Codota search engine and presents vertically scrollable search results in a CodotaView tool window.


[Source:- Javaworld]