Docker Swarm beats Kubernetes? Not so fast

Does Docker’s Swarm container orchestration system outperform Google’s Kubernetes? A recent benchmark says so, but the bigger picture is more complex.

According to a study commissioned by Docker from technology consultant Jeff Nickoloff, Swarm outperformed Kubernetes in container startup time. Most of the Swarm-managed containers started in under a second, while Kubernetes took 2 to 3 seconds.

Nickoloff documented his testing in detail, examining both container startup time and system responsiveness under load. Both services ran on a 1,000-node cluster running a maximum of 30,000 containers. On a cluster that was 90 to 99 percent full, Kubernetes startup time rose to 15 seconds, but Nickoloff discarded these results on the grounds that they were likely due to issues that are already being addressed.

Docker said Swarm’s simpler architecture was a key reason for its speed. The Kubernetes stack involves interactions between six other components besides Docker, while Docker Swarm has only two others.

Short and predictable container startup times, Docker argues, are key to obtaining operational insights from “distributed applications that need near-real-time responsiveness.” With containers, says Docker, it’s not enough to know that a container has been scheduled to run, as Kubernetes reports; it’s important to know how long the container actually took to start.

In a blog post, Docker states, “In a world where containers may only live for a few minutes, having a significant delay in gathering real-time insight into the state of the environment means you never really know what’s happening in your infrastructure at any particular moment in time.”

Not everyone saw Nickoloff’s findings as a slam dunk. Kelsey Hightower, formerly of CoreOS and now with Google’s Cloud Platform division (where Kubernetes originally took flight), tweeted, “Kubernetes and Docker Swarm focus on different things.” Kubernetes is more of an all-in-one framework for distributed systems, and its complexity stems from offering “a unified set of APIs and strong guarantees about cluster state.”

“Does Docker Swarm win in a few isolated benchmarks?” tweeted Hightower. “Yep. Can you really compare the two projects? Right now the answer is no.”

Some of Nickoloff’s comments reflect that as well, as he was impressed by the “remarkable” parallel container scheduling functions available in Kubernetes’ replication controller, useful in environments where containers have a short lifetime. “Using a Kubernetes replication controller,” wrote Nickoloff, “I was able to create 3,000 container replicas in under 155 seconds.”

 

[Source:- Javaworld]

Turns out machine learning is a champ at fixing buggy code


Here’s yet another new application of machine learning: MIT has developed a system for fixing errors in bug-riddled code.

The new machine-learning system developed by researchers at MIT can fix roughly 10 times as many errors as its predecessors could, the researchers say. They presented a paper describing the new system, dubbed “Prophet,” at the Principles of Programming Languages symposium last month.

Essentially, the system works by studying patches already made to open-source computer programs in the past in order to learn their general properties. Prophet was given 777 errors and fixes in eight common open-source applications stored in the online repository GitHub.

The system then applies that knowledge to produce new repairs for new bugs in a different set of programs.

Fan Long, a graduate student in electrical engineering and computer science who was co-author on the paper, had actually already developed an algorithm that attempts to repair program bugs by systematically modifying program code. The only problem was, it could take a prohibitively long time.

The new machine-learning system works in conjunction with that earlier algorithm but ranks possible patches according to the probability that they are correct before subjecting them to time-consuming tests.

The researchers tested the system on a set of 69 real-world errors that had cropped up in eight popular open-source programs. Where earlier bug-repair systems were able to repair one or two of the bugs, the new system repaired between 15 and 18, depending on whether it settled on the first solution it found or was allowed to run longer.

That’s certainly useful, but the implications could be even bigger, according to Martin Rinard, a professor of electrical engineering and computer science who was also co-author on the paper.

“One of the most intriguing aspects of this research is that we’ve found that there are indeed universal properties of correct code that you can learn from one set of applications and apply to another set of applications,” Rinard explained. “If you can recognize correct code, that has enormous implications across all software engineering. This is just the first application of what we hope will be a brand-new, fabulous technique.”

 
[Source:- Javaworld]

SamsaraJS library juices mobile Web UIs


The former chief architect of the Famo.us JavaScript library has moved on to developing his own JavaScript UI project for the mobile Web, called SamsaraJS.

Described on GitHub as a “functional reactive library for animating layout,” SamsaraJS is the brainchild of David Valdman, who left Famo.us — now known as Famous — in August 2014. The library, which recently released version 0.2.0, grew out of his work there.

“SamsaraJS gives Web developers a tool to create native-like app experiences,” Valdman said in an email. The library was created to solve performance issues on the mobile Web.

SamsaraJS provides a language for positioning, orienting, and sizing DOM elements and animating these properties. Everything from the user input to the rendering pipeline is a stream, so building a user interface becomes the art of composing streams, according to Samsara’s GitHub explanation.

Valdman said Web developers are held back primarily by two factors: performance on mobile, and expressiveness. “By expressiveness, I mean a way to think about interactivity in user interfaces that isn’t watered down. With CSS3 we could finally have animation,” he said. “But coordinating the animation of dozens of items, tying that to user gestures, incorporating physics and 3D space, that’s well out of the reach of CSS3.” SamsaraJS is trying to make this simple without compromising on performance, he said.

Asked how SamsaraJS compares with other JavaScript technologies including Famo.us, React, or Angular, Valdman said it was focused on one element: layout. “That means x, y, z positions; heights and widths; opacity. These are the kinds of things that change at 60fps when animation and responsive design are involved. These are also the greatest barriers to performance. Other MVC frameworks are concerned with content, and mostly leave layout to the developer.”

Since content does not change at 60fps, these frameworks focus on very different problems: data-binding, routing, and so on, said Valdman. “SamsaraJS is meant to be used with one of these frameworks, where it controls the layout and the framework populates it with content. This harkens back to the original separation of concerns of the Web.”

 

[Source:- Javaworld]

Open source Java projects: Apache Phoenix


Apache Phoenix is a relatively new open source Java project that provides a JDBC driver and SQL access to Hadoop’s NoSQL database: HBase. It was created as an internal project at Salesforce, open sourced on GitHub, and became a top-level Apache project in May 2014. If you have strong SQL programming skills and would like to be able to use them with a powerful NoSQL database, Phoenix could be exactly what you’re looking for!

This installment of Open source Java projects introduces Java developers to Apache Phoenix. Since Phoenix runs on top of HBase, we’ll start with an overview of HBase and how it differs from relational databases. You’ll learn how Phoenix bridges the gap between SQL and NoSQL, and how it’s optimized to efficiently interact with HBase. With those basics out of the way, we’ll spend the remainder of the article learning how to work with Phoenix. You’ll set up and integrate HBase and Phoenix, create a Java application that connects to HBase through Phoenix, and then you’ll create your first table, insert data, and run a few queries on it.

HBase: A primer

Apache HBase is a NoSQL database that runs on top of Hadoop as a distributed and scalable big data store. HBase is a column-oriented database that leverages the distributed processing capabilities of the Hadoop Distributed File System (HDFS) and Hadoop’s MapReduce programming paradigm. It was designed to host large tables with billions of rows and potentially millions of columns, all running across a cluster of commodity hardware.


In addition to capabilities inherited from Hadoop, HBase is a powerful database in its own right: it combines real-time queries with the speed of a key/value store, a robust table-scanning strategy for quickly locating records, and it supports batch processing using MapReduce. As such, Apache HBase combines the power and scalability of Hadoop with the ability to query for individual records and execute MapReduce processes.

HBase’s data model

HBase organizes data differently from traditional relational databases, supporting a four-dimensional data model in which each “cell” is represented by four coordinates:

  1. Row key: Each row has a unique row key that is represented internally by a byte array, but does not have any formal data type.
  2. Column family: The data contained in a row is partitioned into column families; each row has the same set of column families, but each column family does not need to maintain the same set of column qualifiers. You can think of column families as being similar to tables in a relational database.
  3. Column qualifier: These are similar to columns in a relational database.
  4. Version: Each column can have a configurable number of versions. If you request the data contained in a column without specifying a version then you receive the latest version, but you can request older versions by specifying a version number.

Figure 1 shows how these four coordinates are related.

Figure 1. HBase data model

The model in Figure 1 shows that a row comprises a row key and an arbitrary number of column families. Recalling the analogy above, each column family acts like its own small table: every row has the same set of column families, but the columns stored within a family can differ from row to row. Each column family has a set of columns, and each column has a set of versions that map to the actual data in the row.

If we were modeling a person, the row key might be the person’s social security number (to uniquely identify them), and we might have column families like address, employment, education, and so forth. Inside the address column family we might have street, city, state, and zip code columns, and each version might correspond to where the person lived at any given time. The latest version might list the city “Los Angeles,” while the previous version might list “New York.” You can see this example model in Figure 2.


Figure 2. Person model in HBase

In sum, HBase is a column-oriented database that represents data in a four dimensional model. It is built on top of the Hadoop Distributed File System (HDFS), which partitions data across potentially thousands of commodity machines. Developers using HBase can access data directly by accessing a row key, by scanning across a range of row keys, or by using batch processing via MapReduce.
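
To make these access patterns concrete, here is a minimal sketch of how the person row described above might be written and then read back by row key with HBase’s native Java client API. This sketch is not part of the original article: it assumes a person table with an address column family already exists, and it uses the standard HBase 1.1 client classes.

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class PersonExample {
    public static void main(String[] args) throws IOException {
        Configuration config = HBaseConfiguration.create();
        try (Connection connection = ConnectionFactory.createConnection(config);
             Table table = connection.getTable(TableName.valueOf("person"))) {

            // Write one cell: row key = SSN, column family = address, qualifier = city
            Put put = new Put(Bytes.toBytes("123-45-6789"));
            put.addColumn(Bytes.toBytes("address"), Bytes.toBytes("city"), Bytes.toBytes("Los Angeles"));
            table.put(put);

            // Read the row back directly by its row key; with no version specified,
            // HBase returns the latest version of the cell
            Result result = table.get(new Get(Bytes.toBytes("123-45-6789")));
            byte[] city = result.getValue(Bytes.toBytes("address"), Bytes.toBytes("city"));
            System.out.println("city = " + Bytes.toString(city));
        }
    }
}

Requesting the cell without a version, as above, returns the latest value; older versions (“New York” in the example) remain retrievable only if the column family is configured to keep them.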

Bridging the NoSQL gap: Apache Phoenix

Apache Phoenix is a top-level Apache project that provides an SQL interface to HBase, mapping HBase models to a relational database world. Of course, HBase provides its own API and shell for performing functions like scan, get, put, list, and so forth, but more developers are familiar with SQL than NoSQL. The goal of Phoenix is to provide a commonly understood interface for HBase.

In terms of features, Phoenix does the following:

  • Provides a JDBC driver for interacting with HBase.
  • Supports much of the ANSI SQL standard.
  • Supports DDL operations such as CREATE TABLE, DROP TABLE, and ALTER TABLE.
  • Supports DML operations such as UPSERT and DELETE.
  • Compiles SQL queries into native HBase scans and then maps the response to JDBC ResultSets.
  • Supports versioned schemas.

In addition to supporting a vast set of SQL operations, Phoenix also delivers high performance. It analyzes SQL queries, breaks them down into multiple HBase scans, and runs those scans in parallel, using the native HBase API instead of MapReduce processes.

Phoenix uses two strategies, co-processors and custom filters, to bring computation closer to the data:

  • Co-processors perform operations on the server, which minimizes client/server data transfer.
  • Custom filters reduce the amount of data returned in a query response from the server, which further reduces the amount of transferred data. Custom filters are used in a few ways:
    1. When executing a query, a custom filter can be used to identify only the essential column families required to satisfy the search.
    2. A skip scan filter uses HBase’s SEEK_NEXT_USING_HINT to quickly navigate from one record to the next, which speeds up point queries.
    3. A custom filter can “salt the data,” meaning that it adds a hash byte at the beginning of the row key so that it can quickly locate records.

In sum, Phoenix leverages direct access to HBase APIs, co-processors, and custom filters to give you millisecond-level performance for small datasets and second-level performance for humongous ones. Above all, Phoenix exposes these capabilities to developers via a familiar JDBC and SQL interface.

Get started with Phoenix

In order to use Phoenix, you need to download and install both HBase and Phoenix. You can find the Phoenix download page (and HBase compatibility notes) here.

Download and setup

At the time of this writing, the latest version of Phoenix is 4.6.0, and the download page notes that the 4.x line is compatible with HBase version 0.98.1+. For my example, I downloaded the latest version of Phoenix that is configured to work with HBase 1.1. You can find it in the folder phoenix-4.6.0-HBase-1.1/.

Here’s the setup:

  1. Download and decompress this archive and then use one of the recommended mirror pages here to download HBase. For instance, I selected a mirror, navigated into the 1.1.2 folder, and downloaded hbase-1.1.2-bin.tar.gz.
  2. Decompress this file and create an HBASE_HOME environment variable that points to it; for example, I added the following to my ~/.bash_profile file (on Mac): export HBASE_HOME=/Users/shaines/Downloads/hbase-1.1.2.

Integrate Phoenix with HBase

The process to integrate Phoenix into HBase is simple:

  1. Copy the following file from the Phoenix root directory to the HBase lib directory: phoenix-4.6.0-HBase-1.1-server.jar.
  2. Start HBase by executing the following script from HBase’s bin directory: ./start-hbase.sh.
  3. With HBase running, test that Phoenix is working by launching the SQLLine console with the following command from Phoenix’s bin directory: ./sqlline.py localhost.

The SQLLine console

sqlline.py is a Python script that starts a console that connects to HBase’s Zookeeper address; localhost in this case. The rest of this section walks through a short example session.

First, let’s view all of the tables in HBase by executing !tables:



0: jdbc:phoenix:localhost> !tables
+------------------------------------------+------------------------------------------+------------------------------------------+------------------------------------------+--------------------------+
|                TABLE_CAT                 |               TABLE_SCHEM                |                TABLE_NAME                |                TABLE_TYPE                |                 REMARKS  |
+------------------------------------------+------------------------------------------+------------------------------------------+------------------------------------------+--------------------------+
|                                          | SYSTEM                                   | CATALOG                                  | SYSTEM TABLE                             |                          |
|                                          | SYSTEM                                   | FUNCTION                                 | SYSTEM TABLE                             |                          |
|                                          | SYSTEM                                   | SEQUENCE                                 | SYSTEM TABLE                             |                          |
|                                          | SYSTEM                                   | STATS                                    | SYSTEM TABLE                             |                          |
+------------------------------------------+------------------------------------------+------------------------------------------+------------------------------------------+--------------------------+    


Because this is a new instance of HBase, the only tables that exist are system tables. You can create a table by executing a create table command:



0: jdbc:phoenix:localhost> create table test (mykey integer not null primary key, mycolumn varchar);
No rows affected (2.448 seconds)


This command creates a table named test, with an integer primary key named mykey and a varchar column named mycolumn. Now insert a couple rows by using the upsert command:



0: jdbc:phoenix:localhost> upsert into test values (1,'Hello');
1 row affected (0.142 seconds)
0: jdbc:phoenix:localhost> upsert into test values (2,'World!');
1 row affected (0.008 seconds)


UPSERT is an SQL command for inserting a record if it does not exist or updating a record if it does. In this case, we inserted (1,’Hello’) and (2,’World!’). You can find the complete Phoenix command reference here. Finally, query your table to see the values that you upserted by executing select * from test:


0: jdbc:phoenix:localhost> select * from test;

+------------------------------------------+------------------------------------------+
|                  MYKEY                   |                 MYCOLUMN                 |
+------------------------------------------+------------------------------------------+
| 1                                        | Hello                                    |
| 2                                        | World!                                   |
+------------------------------------------+------------------------------------------+
2 rows selected (0.111 seconds)


As expected, you’ll see the values that you just inserted. If you want to clean up the table, execute a drop table test command.

Java programming with Phoenix

Connecting to and interacting with HBase through Phoenix is as simple as connecting to any database using a JDBC driver:

  • Add the JDBC driver to your CLASSPATH.
  • Use the DriverManager to obtain a connection to the database.
  • Execute queries against the database.

I have uploaded the source code for this example to GitHub. I first set up a new Maven project and configured my POM file as shown in Listing 1.

Listing 1. pom.xml



<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <groupId>com.geekcap.javaworld</groupId>
    <artifactId>phoenix-example</artifactId>
    <packaging>jar</packaging>
    <version>1.0-SNAPSHOT</version>
    <name>phoenix-example</name>
    <url>http://maven.apache.org</url>

    <properties>
        <java.version>1.6</java.version>
        <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
    </properties>

    <dependencies>
        <dependency>
            <groupId>org.apache.phoenix</groupId>
            <artifactId>phoenix-core</artifactId>
            <version>4.6.0-HBase-1.1</version>
        </dependency>
        <dependency>
            <groupId>junit</groupId>
            <artifactId>junit</artifactId>
            <version>4.11</version>
            <scope>test</scope>
        </dependency>
    </dependencies>

    <build>
        <plugins>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-compiler-plugin</artifactId>
                <version>2.0.2</version>
                <configuration>
                    <source>${java.version}</source>
                    <target>${java.version}</target>
                </configuration>
            </plugin>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-jar-plugin</artifactId>
                <configuration>
                    <archive>
                        <manifest>
                            <addClasspath>true</addClasspath>
                            <classpathPrefix>lib/</classpathPrefix>
                            <mainClass>com.geekcap.javaworld.phoenixexample.PhoenixExample</mainClass>
                        </manifest>
                    </archive>
                </configuration>
            </plugin>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-dependency-plugin</artifactId>
                <executions>
                    <execution>
                        <id>copy</id>
                        <phase>install</phase>
                        <goals>
                            <goal>copy-dependencies</goal>
                        </goals>
                        <configuration>
                            <outputDirectory>${project.build.directory}/lib</outputDirectory>
                        </configuration>
                    </execution>
                </executions>
            </plugin>
        </plugins>
    </build>
</project>



This POM file imports the Phoenix Core Maven dependency, which provides access to the Phoenix JDBC driver:


        
<dependency>
            <groupId>org.apache.phoenix</groupId>
            <artifactId>phoenix-core</artifactId>
            <version>4.6.0-HBase-1.1</version>
        </dependency>


The POM file does some housekeeping work next: it sets the source compilation to Java 6, specifies that dependencies should be copied to the target/lib folder during the build, and makes the resulting JAR file executable for the main class, com.geekcap.javaworld.phoenixexample.PhoenixExample.

Listing 2 shows the source code for the PhoenixExample class.

Listing 2. PhoenixExample.java



package com.geekcap.javaworld.phoenixexample;

import java.sql.*;

public class PhoenixExample {

    public static void main(String[] args) {
        // Create variables
        Connection connection = null;
        Statement statement = null;
        ResultSet rs = null;
        PreparedStatement ps = null;

        try {
            // Connect to the database
            connection = DriverManager.getConnection("jdbc:phoenix:localhost");

            // Create a JDBC statement
            statement = connection.createStatement();

            // Execute our statements
            statement.executeUpdate("create table javatest (mykey integer not null primary key, mycolumn varchar)");
            statement.executeUpdate("upsert into javatest values (1,'Hello')");
            statement.executeUpdate("upsert into javatest values (2,'Java Application')");
            connection.commit();

            // Query for table
            ps = connection.prepareStatement("select * from javatest");
            rs = ps.executeQuery();
            System.out.println("Table Values");
            while(rs.next()) {
                Integer myKey = rs.getInt(1);
                String myColumn = rs.getString(2);
                System.out.println("\tRow: " + myKey + " = " + myColumn);
            }
        }
        catch(SQLException e) {
            e.printStackTrace();
        }
        finally {
            if(ps != null) {
                try {
                    ps.close();
                }
                catch(Exception e) {}
            }
            if(rs != null) {
                try {
                    rs.close();
                }
                catch(Exception e) {}
            }
            if(statement != null) {
                try {
                    statement.close();
                }
                catch(Exception e) {}
            }
            if(connection != null) {
                try {
                    connection.close();
                }
                catch(Exception e) {}
            }
        }
    }
}


Listing 2 first creates a database connection by passing jdbc:phoenix:localhost as the JDBC URL to the DriverManager class, as shown here:

connection = DriverManager.getConnection("jdbc:phoenix:localhost");

Just like in the shell console, localhost refers to the server running Zookeeper. If you were connecting to a production HBase instance, you would want to use the Zookeeper server name or IP address for that production instance. With a java.sql.Connection, the rest of the example is simple JDBC code. The steps are as follows:

  1. Create a Statement for the connection.
  2. Execute a series of statements using the executeUpdate() method.
  3. Create a PreparedStatement to select all the data that we inserted.
  4. Execute the PreparedStatement, retrieve a ResultSet, and iterate over the results.

You can build the project as follows: mvn clean install.

Then execute it with the following command from the target directory:

java -jar phoenix-example-1.0-SNAPSHOT.jar

You should see output like the following (note that I excluded the Log4j warning messages):

Table Values
    Row: 1 = Hello
    Row: 2 = Java Application

You can also verify this from the Phoenix console. First execute a !tables command to view the tables and observe that JAVATEST is there:

0: jdbc:phoenix:localhost> !tables


+------------------------------------------+------------------------------------------+------------------------------------------+------------------------------------------+--------------------------+
|                TABLE_CAT                 |               TABLE_SCHEM                |                TABLE_NAME                |                TABLE_TYPE                |                 REMARKS  |
+------------------------------------------+------------------------------------------+------------------------------------------+------------------------------------------+--------------------------+
|                                          | SYSTEM                                   | CATALOG                                  | SYSTEM TABLE                             |                          |
|                                          | SYSTEM                                   | FUNCTION                                 | SYSTEM TABLE                             |                          |
|                                          | SYSTEM                                   | SEQUENCE                                 | SYSTEM TABLE                             |                          |
|                                          | SYSTEM                                   | STATS                                    | SYSTEM TABLE                             |                          |
|                                          |                                          | JAVATEST                                 | TABLE                                    |                          |
+------------------------------------------+------------------------------------------+------------------------------------------+------------------------------------------+--------------------------+


Finally, query the JAVATEST table to see your data:



0: jdbc:phoenix:localhost> select * from javatest;

+------------------------------------------+------------------------------------------+
|                  MYKEY                   |                 MYCOLUMN                 |
+------------------------------------------+------------------------------------------+
| 1                                        | Hello                                    |
| 2                                        | Java Application                         |
+------------------------------------------+------------------------------------------+

Note that if you want to run this example multiple times, you will need to drop the table, either from the console or by adding the following to the end of Listing 2:

statement.executeUpdate("drop table javatest");

As you can see, using Phoenix is a simple matter of creating a JDBC connection and using the JDBC APIs. With this knowledge you should be able to start using Phoenix with more advanced tools like Spring’s JdbcTemplate or any of your other favorite JDBC abstraction libraries!
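
As an illustration, here is a minimal sketch of what the JdbcTemplate route might look like. This sketch is not from the original article: it assumes the JAVATEST table from Listing 2 already exists, that the spring-jdbc dependency has been added to the POM alongside phoenix-core, and that the class name PhoenixJdbcTemplateExample is made up for the example.

package com.geekcap.javaworld.phoenixexample;

import java.util.List;
import java.util.Map;

import org.apache.phoenix.jdbc.PhoenixDriver;
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.jdbc.datasource.SimpleDriverDataSource;

public class PhoenixJdbcTemplateExample {
    public static void main(String[] args) {
        // Wrap Phoenix's JDBC driver and URL in a Spring DataSource
        SimpleDriverDataSource dataSource =
                new SimpleDriverDataSource(PhoenixDriver.INSTANCE, "jdbc:phoenix:localhost");
        JdbcTemplate jdbcTemplate = new JdbcTemplate(dataSource);

        // JdbcTemplate manages the connection, statement, and result set for us;
        // Phoenix returns unquoted identifiers in uppercase (MYKEY, MYCOLUMN)
        List<Map<String, Object>> rows = jdbcTemplate.queryForList("select * from javatest");
        for (Map<String, Object> row : rows) {
            System.out.println(row.get("MYKEY") + " = " + row.get("MYCOLUMN"));
        }
    }
}

Because the sketch only reads data, it sidesteps commit handling; upserts issued through JdbcTemplate would still need a commit (or auto-commit enabled), just as Listing 2 calls connection.commit().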

Conclusion

Apache Phoenix provides an SQL layer on top of Apache HBase that allows you to interact with HBase in a familiar manner. You can leverage the scalability that HBase derives from running on top of HDFS, along with the multi-dimensional data model that HBase provides, and you can do it using familiar SQL syntax. Phoenix also supports high performance by leveraging native HBase APIs rather than MapReduce processes; implementing co-processors to reduce client/server data transfer; and providing custom filters that improve the execution, navigation, and speed of data querying.

Using Phoenix is as simple as adding a JAR file to HBase, adding Phoenix’s JDBC driver to your CLASSPATH, and creating a standard JDBC connection to Phoenix using its JDBC URL. Once you have a JDBC connection, you can use HBase just as you would any other database.

This Open source Java projects tutorial has provided an overview of both HBase and Phoenix, including the specific motivation for developing each of these technologies. You’ve set up and integrated Phoenix and HBase in your local working environment, and learned how to interact with Phoenix using the Phoenix console and through a Java application. With this foundation you should be well prepared to start building applications on top of HBase, using standard SQL.
[Source:- Javaworld]

Docker goes rootless — and that’s a good thing


Docker 1.10, the latest version of the software containerization system, addresses one of its most long-standing criticisms.

Until now, containers have had to run as root under the Docker daemon, with various hair-raising (in)security implications. The solution in Docker 1.10 is a feature called user namespacing. Originally introduced as an experimental feature in version 1.9, it’s now generally available in version 1.10 along with a bundle of other improvements.

A safe space for your name

With user namespaces, privileges for the Docker daemon and container are handled separately, so each container can receive its own user-level privileges. Containers do not need root access on the host, although the Docker daemon still does.

However, Nathan McCauley, director of security at Docker, clarified in an email that user namespaces are currently available only for Linux. “Windows has its own isolation features that we’ll integrate with Docker,” he wrote. “On every platform we’ll aim to support every isolation feature.”

Docker has further expanded on user namespaces by providing a plug-in mechanism for authorization, so admins can configure how Docker deals with user access policies within their organization. Syscalls passed from containers can also be permitted, denied, or traced based on policy settings.

That the Docker runtime runs as root, and the security implications stemming from that, have long been chief criticisms of Docker. CoreOS was among the most vocal critics, and used the 1.0 release of its rkt container runtime to show how it is possible to run containers without root access. (Rkt can run existing Docker containers as-is.)

Docker has long been aware that having users in containers potentially run as root is problematic, but it took several point revisions of Docker to mitigate that by way of namespace support and to make it stable. CoreOS’s rkt currently has experimental support for such a feature.

Re-weaving the net

Docker 1.10 adds major improvements in two other areas. Docker Compose, the native Docker tool for creating multicontainer applications, has a new definition format that now includes ways to describe networks between containers, as supported by Docker’s networking subsystem. This means the work needed to describe a multicontainer application is spread across fewer places.

Networking has also been bolstered. The Docker daemon now includes its own DNS client, which is a way to allow container networks to automatically perform service discovery without the /etc/hosts hackery that was previously required. DNS queries can also be forwarded to an external server if needed.

This goes hand-in-hand with a new internal networking feature that lets containers have network traffic restricted to only their own private subnet by specifying a command-line flag.

Networking has been another long-standing Docker issue and was eventually solved by acquiring and integrating a third-party solution. The latest changes to Docker networking are being touted as a way to take the network topology created for a Dockerized application in development and deploy it in production without changes — addressing another persistent criticism stemming from the limits of Docker’s legacy networking model.

 

[Source:- Javaworld]

GitHub apologizes for ignoring community concerns


GitHub, under fire by developers for allegedly ignoring requests to improve the code-sharing site, has pledged to fix the issues raised.

Brandon Keepers, GitHub’s head of open source, wrote in a letter today that GitHub is sorry, and he acknowledged it has been slow to respond to frustrations.

“We’re working hard to fix this. Over the next few weeks, we’ll begin releasing a number of improvements to issues, many of which will address the specific concerns raised in the letter,” Keepers said. “But we’re not going to stop there. We’ll continue to focus on issues moving forward by adding new features, responding to feedback, and iterating on the core experience. We’ve also got a few surprises in store.”

More than 450 contributors to open source projects last month had posted a “Dear GitHub” letter on GitHub itself, expressing frustration with its poor response to problems and requests, including a need for custom fields for issues, the lack of a proper voting system for issues, and the need for issues and pull requests to automatically include contribution guideline boilerplate text. “We have no visibility into what has happened with our requests, or whether GitHub is working on them,” the letter said.

Keepers acknowledged that issues have not gotten much attention the past few years, which he called a mistake. He stressed that GitHub has not stopped caring about users and their communities. “However, we know we haven’t communicated that. So in addition to improving issues, we’re also going to kick off a few initiatives that will help give you more insight into what’s on our radar. We want to make sharing feedback with GitHub less of a black box experience and we want to hear your ideas and concerns regularly,” he said.

Comments at GitHub on Keepers’ letter were mostly positive. “Good to see it is at least being acknowledged. Curious to see what the improvements actually will be,” one commenter wrote. Many simply posted a thumbs-up emoji.

Forrester analyst Jeffrey Hammond called the users’ concerns legitimate and warned that GitHub cannot ignore them. “I don’t see [possible defections to other sites] as an immediate, existential risk yet — [this is] more like a shot across the bow,” he said. But “if enough of the community bolts all at once, the transition could be immediate.”

 
[Source:- Javaworld]

Npm Inc. explores foundation for JavaScript installer


Npm, the command-line interface that developers run when using the popular Npm package manager for Node.js and JavaScript, will be moved to the jurisdiction of an independent foundation.

A governance model for the technology is under exploration, said Npm founder Isaac Schlueter, CEO of Npm Inc., which currently oversees the project. He hopes the move will expand participation in Npm’s development, as participating today could be awkward because the program is owned and maintained by a single company.

Plans call for completing the move by the end of this year, with Npm Inc. still participating. Other companies are already interested in working with the foundation, Schlueter said, though he would not reveal their names.

The command-line client works with the Npm registry, which features a collection of packages of open source code for Node.js. The Npm system lets developers write bits of code packaged as modules for purposes ranging from database connectors to JavaScript frameworks to command-line tools and CSS tooling.

Enterprise connectivity vendor Equinix, for example, will offer its upcoming AquaJS microservices framework via Npm. Schlueter has called the module system a “killer feature” of Node.js and a big reason for the server-side JavaScript platform’s success. Npm Inc. says there are nearly 242,000 total packages and about 3.5 billion downloads in the past month alone.

Schlueter said that efforts would be made to keep the project on strong footing, adding “what we really don’t want to do is break something that’s working.” There are transparent development processes already in place, he said, and in addition to encouraging more outside participation in Npm’s development, the foundation should ensure the project’s continuity.

Node.js itself was moved to an independent foundation, simply called the Node.js Foundation, last year, after gripes arose over Joyent’s handling of the project and the technology was forked as io.js. But io.js has since been folded back into Node.js. “I think [developing a governance model] will be a lot easier than it was with Node because the team is more on the same page and there are not as many hurdles to jump through,” Schlueter said.

Npm Inc. runs the open source Npm registry as a free service and will continue to do so after the foundation is formed. The company also offers tools and services to support the secure use of packages in a private or enterprise context.

 

[Source:- Javaworld]

You’re doing it wrong: 5 common Docker mistakes


The newer the tool, the tougher it is to use correctly. Sometimes nobody — not even the toolmaker itself — knows how to use it right.

As Docker moves from a hyped newcomer to a battle-tested technology, early adopters have developed best practices and ideal setups to get the most out of it. Along the way, they’ve identified what works — and what doesn’t.

Here are five mistakes that come with using Docker, along with some advice on how to steer clear of them.

Using quick-and-dirty hacks to store secrets

“Secrets” cover anything that you would not want outsiders to see — passwords, keys, one-way hashes, and so on. The Docker team has enumerated some of the hacks people use to store secrets, including environment variables, tricks with container layers or volumes, and manually built containers.

Many of these are done as quick hacks for the sake of convenience, but they can be quickly enshrined as standard procedure — or, worse, leak private information to the world at large.

Part of the problem stems from Docker not handling these issues natively. A couple of earlier proposals were closed for being too general, but one possibility currently under discussion is creating a pluggable system that can be leveraged by third-party products like Vault.

Keywhiz, another recommended tool for storing secrets, can be used in conjunction with volumes. Or users can fetch keys using SSH. But using environment variables or other “leaky” methods should be straight out.

Taking the “one process per container” rule as gospel

Running one process per container is a good rule of thumb — it’s in Docker’s own best practices document — but it’s not an absolute law. Another way to think about it is to have one responsibility per container, where all the processes that relate to a given role — Web server, database, and so on — are gathered together because they belong together.

Sometimes that requires having multiple processes in a single container, especially if you need instances of syslog or cron running inside the container. Baseimage-docker was developed to provide a baseline Linux image (and sane defaults) with those services.

If your reason for having a one-process container is to have a lean container, but you still need some kind of caretaker functionality (startup control, logging), Chaperone might help, as it provides those functions with minimal overhead. It’s not yet recommended for production use, but according to the GitHub page, “if you are currently starting up your container services with Bash scripts, Chaperone is probably a much better choice.”

Ignoring the consequences of caching with Dockerfiles

If images are taking forever to build from Dockerfiles, there’s a good chance misuse or misunderstanding of the build cache is the culprit. Docker provides a few notes about how the cache behaves, and the folks at devo.ps detail specific behaviors that can inadvertently invalidate the cache. (ADD, VOLUME, and RUN commands are the biggest culprits.)

The reverse can also be true: Sometimes, you don’t want the cache to preserve everything, but purging the whole cache is impractical. The folks at CenturyLink have useful notes on when and how to selectively invalidate the cache.

Using Docker when a package manager will do

“Today Docker is usually used to distribute applications instead of just [used] for easier scaling,” says software developer Marc Scholten. “We’re using containers to avoid the downsides of bad package managers.”

If the goal is to simply grab a version of an application and try it out in a disposable form, Docker’s fine for that. But there are times when you really need a package manager. A package manager operates at a lower level of abstraction than a Docker image, provides more granularity, and automatically deals with issues like dependency resolution between packages.

Here and there, work is being done to determine how containers could be used to replace conventional package management altogether. CoreOS, for instance, employs containers as a basic unit of system management. But for now, containers (meaning Docker) are best suited for situations where the real issues are scale and the need to encapsulate multiple versions of apps without side effects.

Building mission-critical infrastructure without laying a foundation first

This ought to be obvious, but it always bears repeating: Docker, like any other tool, works best when used in conjunction with other best practices for creating mission-critical infrastructure. It’s a puzzle piece, not the whole puzzle.

Matt Jaynes of Valdhaus (formerly DevOps University) has noted that he sees “too many folks trying to use Docker prematurely,” without first setting up all the vital details around Docker. “Using and managing [Docker] becomes complex very quickly beyond the tiny examples shown in most articles promoting [it],” says Jaynes.

Automated setup, deployment, and provisioning tools, along with monitoring, least-privilege access, and documentation of the arrangement ought to be in place before Docker is brought in. If that sounds nontrivial, it ought to.

 

[Source:- Javaworld]

Java 9 to address GTK GUI pains on Linux

Plans are afoot to have Java 9 accommodate the GTK 3 GUI toolkit on Linux systems. The move would bring Java current with the latest version of the toolkit and prevent application failure due to mixing of versions.

The intention, according to a Java enhancement proposal on openjdk.net, would be to support GTK (GIMP Toolkit) 2 by default, with GTK 3 used when indicated by a system property. Java graphical applications based on JavaFX, Swing, or AWT (Advanced Window Toolkit) would be accommodated under the plan, and existing applications could run on Linux without modification with either GTK 2 or 3.

The proposal was sent to the openjfx-dev mailing list by Oracle’s Mark Reinhold, chief architect of the Java platform group at the company, which oversees Java’s development. Java 9 is expected to be available in March 2017.

“There are a number of Java packages that use GTK. These include AWT/Swing, JavaFX, and SWT. SWT has migrated to GTK 3, though there is a system property that can be used to force it to use the older version,” the proposal states. “This mixing of packages using different GTK versions causes application failures.”

The issue particularly affects applications when using the Eclipse development platform. The proposal also notes that while GTK 2 and 3 are now available by default on Linux distributions, this may not always be the case.

Also identified as GTK+, the cross-platform toolkit features widgets and an API and is offered as free software via the GNU Project. It has been used in projects ranging from the Apache OpenOffice office software suite to the Inkscape vector graphics editor to the PyShare image uploader.

An alternative to backing both GTK 2 and 3, according to the Java proposal, would be to migrate Java graphics to support only GTK 3, thus reducing efforts required in porting and testing. But this plan could result in a higher number of bugs not detected by testing, require additional effort with the AWT look and feel, and necessitate both or neither of JavaFX/Swing being ported. Such a port also would require more coordination between AWT and Swing.

But a former Java official at Sun Microsystems questioned the demand for this improvement to Java. “I’ve not seen very many Java-based desktop applications on Linux, so not sure how big a market this is addressing,” said Arun Gupta, vice president of developer advocacy at Couchbase and a former member of the Java EE team at Sun.

 
[Source:- Javaworld]

 

Java finally gets microservices tools


Lightbend, formerly known as Typesafe, is bringing microservices-based architectures to Java with its Lagom platform.

Due in early March, Lagom is a microservices framework that lightens the burden of developing microservices in Java. Built on the Scala functional language, open source Lagom acts as a development environment for managing microservices. APIs initially are provided for Java services, with Scala to follow.

The framework features Lightbend’s Akka middleware technologies as well as its ConductR microservices deployment tool and Play Web framework. Applications are deployed to Lightbend’s commercial Reactive platform for message-driven applications or via open source Akka.

Lightbend sees microservices as loosely coupled, isolated, single-responsibility services, each owning its own data and easily composed into larger systems. Lagom provides for asynchronous communication and event sourcing, which involves storing the events that lead up to a particular state, company officials said.

Analyst James Governor of RedMonk sees an opportunity for Lagom. “The Java community needs good tools for creating and managing microservices architectures,” he said. “Lagom is squarely aimed at that space.”

Lagom would compete with the Spring Boot application platform in some areas, according to Governor. “It is early days for Lagom, but the design points make sense,” he noted. Typesafe was focused on Scala, which was adopted in some industries, such as financial services, but never became mainstream, he argues. “So [the company now] is looking to take its experiences and tooling and make them more generally applicable with a Java-first strategy.”

 

[Source:- Javaworld]