GitHub apologizes for ignoring community concerns

GitHub, under fire from developers for allegedly ignoring requests to improve the code-sharing site, has pledged to fix the issues raised.

Brandon Keepers, GitHub’s head of open source, wrote in a letter today that GitHub is sorry, and he acknowledged it has been slow to respond to frustrations.

“We’re working hard to fix this. Over the next few weeks, we’ll begin releasing a number of improvements to issues, many of which will address the specific concerns raised in the letter,” Keepers said. “But we’re not going to stop there. We’ll continue to focus on issues moving forward by adding new features, responding to feedback, and iterating on the core experience. We’ve also got a few surprises in store.”

Last month, more than 450 contributors to open source projects posted a “Dear GitHub” letter on GitHub itself, expressing frustration with the company’s poor response to problems and requests, including the need for custom fields on issues, the lack of a proper voting system for issues, and the need for issues and pull requests to automatically include contribution-guideline boilerplate text. “We have no visibility into what has happened with our requests, or whether GitHub is working on them,” the letter said.

Keepers acknowledged that issues have not gotten much attention the past few years, which he called a mistake. He stressed that GitHub has not stopped caring about users and their communities. “However, we know we haven’t communicated that. So in addition to improving issues, we’re also going to kick off a few initiatives that will help give you more insight into what’s on our radar. We want to make sharing feedback with GitHub less of a black box experience and we want to hear your ideas and concerns regularly,” he said.

Comments at GitHub on Keepers’ letter were mostly positive. “Good to see it is at least being acknowledged. Curious to see what the improvements actually will be,” one commenter wrote. Many simply posted a thumbs-up emoji.

Forrester analyst Jeffrey Hammond called the users’ concerns legitimate and warned that GitHub cannot ignore them. “I don’t see [possible defections to other sites] as an immediate, existential risk yet — [this is] more like a shot across the bow,” he said. But “if enough of the community bolts all at once, the transition could be immediate.”

[Source: JavaWorld]

Npm Inc. explores foundation for JavaScript installer

Npm, the command-line interface that developers run when using the popular Npm package manager for Node.js and JavaScript, will be moved to the jurisdiction of an independent foundation.

A governance model for the technology is under exploration, said Npm founder Isaac Schlueter, CEO of Npm Inc., which currently oversees the project. He hopes the move will expand participation in Npm’s development, as participating today could be awkward because the program is owned and maintained by a single company.

Plans call for completing the move by the end of this year, with Npm Inc. still participating. Other companies are already interested in working with the foundation, Schlueter said, though he would not reveal their names.

The command-line client works with the Npm registry, which features a collection of packages of open source code for Node.js. The Npm system lets developers write bits of code packaged as modules for purposes ranging from database connectors to JavaScript frameworks to command-line tools and CSS tooling.
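To make the module idea concrete, here is a minimal sketch of the kind of package a developer might publish to the Npm registry; the package name, function, and API here are invented for illustration, not an actual registry package:

```javascript
// pad-left.js — an illustrative module of the sort shared via the Npm
// registry (hypothetical example; name and API are invented here).
function padLeft(str, len, ch) {
  ch = ch || ' ';            // default pad character is a space
  str = String(str);
  while (str.length < len) { // prepend until the target length is reached
    str = ch + str;
  }
  return str;
}
module.exports = padLeft;

// A consumer would install the package with `npm install <name>`, then:
//   const padLeft = require('pad-left');
//   padLeft('42', 5, '0');  // '00042'
```

Small, single-purpose modules like this are exactly what the registry's packages tend to look like, which is why the dependency handling that the client and registry provide matters so much.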

Enterprise connectivity vendor Equinix, for example, will offer its upcoming AquaJS microservices framework via Npm. Schlueter has called the module system a “killer feature” of Node.js and a big reason for the server-side JavaScript platform’s success. Npm Inc. says there are nearly 242,000 total packages, with about 3.5 billion downloads in the past month alone.

Schlueter said that efforts would be made to keep the project on strong footing, adding, “What we really don’t want to do is break something that’s working.” There are transparent development processes already in place, he said, and in addition to encouraging more outside participation in Npm’s development, the foundation should ensure the project’s continuity.

Node.js itself was moved to an independent foundation, simply called the Node.js Foundation, last year, after gripes arose over Joyent’s handling of the project and a fork of the technology, called io.js, emerged. But io.js has since been folded back into Node.js. “I think [developing a governance model] will be a lot easier than it was with Node because the team is more on the same page and there are not as many hurdles to jump through,” Schlueter said.

Npm Inc. runs the open source Npm registry as a free service and will continue to do so after the foundation is formed. The company also offers tools and services to support the secure use of packages in a private or enterprise context.


[Source: JavaWorld]

You’re doing it wrong: 5 common Docker mistakes

The newer the tool, the tougher it is to use correctly. Sometimes nobody — not even the toolmaker itself — knows how to use it right.

As Docker moves from a hyped newcomer to a battle-tested technology, early adopters have developed best practices and ideal setups to get the most out of it. Along the way, they’ve identified what works — and what doesn’t.

Here are five mistakes that come with using Docker, along with some advice on how to steer clear of them.

Using quick-and-dirty hacks to store secrets

“Secrets” cover anything that you would not want outsiders to see: passwords, keys, one-way hashes, and so on. The Docker team has enumerated some of the hacks people use to store secrets, including environment variables, tricks with container layers or volumes, and manually built containers.

Many of these start as hacks done for the sake of convenience, but they can quickly become enshrined as standard procedure or, worse, leak private information to the world at large.

Part of the problem stems from Docker not handling these issues natively. A couple of earlier proposals were closed for being too general, but one possibility currently under discussion is creating a pluggable system that can be leveraged by third-party products like Vault.

Keywhiz, another recommended secret store, can be used in conjunction with volumes. Or users can fetch keys over SSH. But using environment variables or other “leaky” methods should be ruled out.
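As a sketch of the volume approach, a secret kept in a file on the host can be mounted read-only into the container instead of being passed through the environment. The service name and paths below are hypothetical:

```yaml
# docker-compose sketch (hypothetical names and paths): the key file lives
# on the host and is mounted read-only, so its contents never land in image
# layers — unlike environment variables, which `docker inspect` shows verbatim.
services:
  web:
    image: example/my-app
    volumes:
      - /secure/api-key.txt:/run/secrets/api-key.txt:ro
```

The application then reads the secret from the mounted file at startup, and rotating the key is a matter of replacing the host file rather than rebuilding the image.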

Taking the “one process per container” rule as gospel

Running one process per container is a good rule of thumb — it’s in Docker’s own best practices document — but it’s not an absolute law. Another way to think about it is to have one responsibility per container, where all the processes that relate to a given role — Web server, database, and so on — are gathered together because they belong together.

Sometimes that requires having multiple processes in a single container, especially if you need instances of syslog or cron running inside the container. Baseimage-docker was developed to provide a baseline Linux image (and sane defaults) with those services.

If your reason for having a one-process container is to keep the container lean, but you still need some kind of caretaker functionality (startup control, logging), Chaperone might help, as it provides those functions with minimal overhead. It’s not yet recommended for production use, but according to the project’s GitHub page, “if you are currently starting up your container services with Bash scripts, Chaperone is probably a much better choice.”

Ignoring the consequences of caching with Dockerfiles

If images are taking forever to build from Dockerfiles, there’s a good chance misuse or misunderstanding of the build cache is the culprit. Docker provides a few notes about how the cache behaves, and others have detailed specific behaviors that can inadvertently invalidate the cache. (The ADD, VOLUME, and RUN instructions are the biggest culprits.)

The reverse can also be true: Sometimes, you don’t want the cache to preserve everything, but purging the whole cache is impractical. The folks at CenturyLink have useful notes on when and how to selectively invalidate the cache.
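One common way to work with the cache rather than against it is instruction ordering: put the steps that change rarely before the ones that change often. Here is a hedged sketch; the file names assume a Node.js project, and the base image and entry point are placeholders:

```dockerfile
# Base image is a placeholder; any base works the same way.
FROM node:argon
WORKDIR /app

# Copy only the dependency manifest first; this layer (and the install
# step below) stays cached until package.json itself changes.
COPY package.json /app/
RUN npm install

# Copy the frequently edited source last, so day-to-day code changes
# do not invalidate the cached npm install layer above.
COPY . /app/
CMD ["node", "server.js"]
```

Reversing the order of the two COPY steps would force a full dependency reinstall on every source edit, which is precisely the slow-build symptom described above.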

Using Docker when a package manager will do

“Today Docker is usually used to distribute applications instead of just [used] for easier scaling,” says software developer Marc Scholten. “We’re using containers to avoid the downsides of bad package managers.”

If the goal is to simply grab a version of an application and try it out in a disposable form, Docker’s fine for that. But there are times when you really need a package manager. A package manager operates at a lower level of abstraction than a Docker image, provides more granularity, and automatically deals with issues like dependency resolution between packages.

Here and there, work is being done to determine how containers could be used to replace conventional package management altogether. CoreOS, for instance, employs containers as a basic unit of system management. But for now, containers (meaning Docker) are best suited for situations where the real issues are scale and the need to encapsulate multiple versions of apps without side effects.

Building mission-critical infrastructure without laying a foundation first

This ought to be obvious, but it always bears repeating: Docker, like any other tool, works best when used in conjunction with other best practices for creating mission-critical infrastructure. It’s a puzzle piece, not the whole puzzle.

Matt Jaynes of Valdhaus (formerly DevOps University) has noted that he sees “too many folks trying to use Docker prematurely,” without first setting up all the vital details around Docker. “Using and managing [Docker] becomes complex very quickly beyond the tiny examples shown in most articles promoting [it],” says Jaynes.

Automated setup, deployment, and provisioning tools, along with monitoring, least-privilege access, and documentation of the arrangement ought to be in place before Docker is brought in. If that sounds nontrivial, it ought to.


[Source: JavaWorld]

Java 9 to address GTK GUI pains on Linux

Plans are afoot to have Java 9 accommodate the GTK 3 GUI toolkit on Linux systems. The move would bring Java current with the latest version of the toolkit and prevent application failure due to mixing of versions.

The intention, according to a Java enhancement proposal, would be to support GTK (GIMP Toolkit) 2 by default, with GTK 3 used when indicated by a system property. Java graphical applications based on JavaFX, Swing, or AWT (Abstract Window Toolkit) would be accommodated under the plan, and existing applications could run on Linux without modification with either GTK 2 or 3.
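For illustration, the opt-in would look something like the following; this assumes the `jdk.gtk.version` property name adopted by the finalized proposal (JEP 283), and the application jar is a placeholder:

```shell
# Run a JavaFX/Swing application against GTK 3 instead of the default GTK 2.
# Property name per JEP 283; MyApp.jar is a hypothetical application.
java -Djdk.gtk.version=3 -jar MyApp.jar
```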

The proposal was sent to the openjfx-dev mailing list by Oracle’s Mark Reinhold, chief architect of the Java platform group at the company, which oversees Java’s development. Java 9 is expected to be available in March 2017.

“There are a number of Java packages that use GTK. These include AWT/Swing, JavaFX, and SWT. SWT has migrated to GTK 3, though there is a system property that can be used to force it to use the older version,” the proposal states. “This mixing of packages using different GTK versions causes application failures.”

The issue particularly affects applications when using the Eclipse development platform. The proposal also notes that while GTK 2 and 3 are now available by default on Linux distributions, this may not always be the case.

Also known as GTK+, the cross-platform toolkit features widgets and an API and is offered as free software via the GNU Project. It has been used in projects ranging from the Apache OpenOffice office software suite to the Inkscape vector graphics editor to the PyShare image uploader.

An alternative to backing both GTK 2 and 3, according to the Java proposal, would be to migrate Java graphics to support only GTK 3, reducing the porting and testing effort. But this plan could result in more bugs escaping detection in testing, require additional effort on the AWT look and feel, and force JavaFX and Swing to be ported together or not at all. Such a port also would require more coordination between AWT and Swing.

But a former Java official at Sun Microsystems questioned the demand for this improvement to Java. “I’ve not seen very many Java-based desktop applications on Linux, so not sure how big a market this is addressing,” said Arun Gupta, vice president of developer advocacy at Couchbase and a former member of the Java EE team at Sun.

[Source: JavaWorld]


Java finally gets microservices tools

Lightbend, formerly known as Typesafe, is bringing microservices-based architectures to Java with its Lagom platform.

Due in early March, Lagom is a microservices framework that lightens the burden of developing microservices in Java. Built on the Scala functional language, open source Lagom acts as a development environment for managing microservices. APIs initially are provided for Java services, with Scala to follow.

The framework features Lightbend’s Akka middleware technologies as well as its ConductR microservices deployment tool and Play Web framework. Applications are deployed to Lightbend’s commercial Reactive platform for message-driven applications or via open source Akka.

Lightbend sees microservices as loosely coupled, isolated, single-responsibility services, each owning its own data and easily composed into larger systems. Lagom provides for asynchronous communication and event sourcing, which stores the sequence of events leading up to an application’s current state, company officials said.

Analyst James Governor of RedMonk sees an opportunity for Lagom. “The Java community needs good tools for creating and managing microservices architectures,” he said. “Lagom is squarely aimed at that space.”

Lagom would compete with the Spring Boot application platform in some areas, according to Governor. “It is early days for Lagom, but the design points make sense,” he noted. Typesafe was focused on Scala, which was adopted in some industries, such as financial services, but never became mainstream, he argues. “So [the company now] is looking to take its experiences and tooling and make them more generally applicable with a Java-first strategy.”


[Source: JavaWorld]

GitLab 8.5 pours on the speed

GitLab this week upgraded its code-hosting platform, emphasizing performance and adding to-do list and remote replica capabilities.

GitLab 8.5 is a lot faster, said Job van der Voort, GitLab’s vice president of product, in a post about the upgrade. “Average mean performance is up at least 1.4 times, up to 1.6 times for 99th percentile response times. For slower pages, the response time has been improved way beyond this.”

The new version features Todos, a chronological list of to-dos. “Whenever you’re assigned to an issue or merge request or have someone mention you, a new to-do is created automatically,” said van der Voort. GitLab 8.5 Enterprise, meanwhile, features an alpha version of Geo, which provides a remote replica of a GitLab instance. Geo makes it quicker to work with large repositories over large distances, and the replica can be used for cloning and fetching projects as well as for reading data.

The GitLab Pages feature for hosting a static website under a separate domain name now supports TLS (Transport Layer Security) certificates and custom domains, and users can upload their own certificates. “The new functionality of GitLab Pages was made possible with the help of a new HTTP server written in Go,” van der Voort said. “We call it the GitLab Pages daemon, and it supports dynamic certificates through SNI and exposes pages using HTTP/2 by default.”

GitLab is vying with juggernaut code-sharing site GitHub in the Git repository market. The latest upgrade follows the release of GitLab 8.4, the 50th release of the platform, by about a month.


[Source: JavaWorld]