Apple released iTunes 12.2 yesterday, along with updates to iOS and OS X. The marquee feature in the new version of iTunes, as well as iOS 8.4, is Apple Music.
But there’s more to iTunes 12.2 than just Apple Music and Beats Radio. Here’s a look at what’s new in the latest update to iTunes.
On the surface
The biggest change to iTunes is the new icon. After switching to red just last year, Apple has moved to a multicolored icon. It probably won’t affect the way you use iTunes, but for a while you’ll have to think twice when you look for it in the Dock or application switcher.
The iTunes navigation bar, at the center of the window, has changed. It’s added a few new buttons for Apple Music.
For You brings you to recommended playlists and albums; New shows you, well, new music you can stream; Radio includes iTunes Radio and Beats 1; and Connect is the gateway to Apple Music Connect, the social part of Apple Music.
You can also search for music to stream from the Search field at the top-right of the iTunes window. Click Apple Music to search for tunes to stream, or click My Library to search your own music.
iTunes Radio stations have been demoted. When you click the Radio button in the navigation bar, Apple really wants you to listen to Beats 1 radio. If you created radio stations before, you need to click Recently Played to see those stations, which are mixed in with any Apple radio stations that you’ve listened to.
iTunes 12.2 also brings some minor cosmetic changes, such as colored banners above playlists, and some slightly different buttons, but fortunately, Apple didn’t make any drastic changes to the interface. After all, iTunes 12, which was a major refresh, was only released last fall.
Under the hood
If you choose iTunes > Preferences and click General, you’ll see that there are a couple of options that deal with Apple Music and iCloud Music Library (you only see the latter if you’re signed into your iTunes Store account). If you don’t plan to use Apple Music, you can turn it off here, and you won’t see it in the iTunes navigation bar.
There’s also a new Ratings option. You can now use both traditional star ratings along with the new “Loves,” which are available in the iOS Music app. You can use one or the other, or both.
Love ratings are a good idea. Asking around today, I found that most people only use one rating anyway; generally five stars for the songs they like. I actually use both four- and five-star ratings, the former for songs I like, and the latter for songs I love. But this new rating system makes things a lot simpler for many users.
You can also use Loved as a criterion for smart playlists, so you can make a playlist to group all the songs you’ve applied this rating to. Since you can only apply loves to songs you stream through Apple Music, you may find this to be very useful.
The Store preferences add options that let you choose whether, after the first time you enter your iTunes Store password, you need to enter it again. There are separate settings for purchases and for free downloads.
And if you want to hide Apple Music Connect, go to the Parental pane of the iTunes Preferences, and, in the Disable section, check Apple Music Connect.
Apache Phoenix is a relatively new open source Java project that provides a JDBC driver and SQL access to Hadoop’s NoSQL database: HBase. It was created as an internal project at Salesforce, open sourced on GitHub, and became a top-level Apache project in May 2014. If you have strong SQL programming skills and would like to be able to use them with a powerful NoSQL database, Phoenix could be exactly what you’re looking for!
This installment of Open source Java projects introduces Java developers to Apache Phoenix. Since Phoenix runs on top of HBase, we’ll start with an overview of HBase and how it differs from relational databases. You’ll learn how Phoenix bridges the gap between SQL and NoSQL, and how it’s optimized to efficiently interact with HBase. With those basics out of the way, we’ll spend the remainder of the article learning how to work with Phoenix. You’ll set up and integrate HBase and Phoenix, create a Java application that connects to HBase through Phoenix, and then create your first table, insert data, and run a few queries on it.
HBase: A primer
Apache HBase is a NoSQL database that runs on top of Hadoop as a distributed and scalable big data store. HBase is a column-oriented database that leverages the distributed processing capabilities of the Hadoop Distributed File System (HDFS) and Hadoop’s MapReduce programming paradigm. It was designed to host large tables with billions of rows and potentially millions of columns, all running across a cluster of commodity hardware.
In addition to capabilities inherited from Hadoop, HBase is a powerful database in its own right: it combines real-time queries with the speed of a key/value store, a robust table-scanning strategy for quickly locating records, and it supports batch processing using MapReduce. As such, Apache HBase combines the power and scalability of Hadoop with the ability to query for individual records and execute MapReduce processes.
HBase’s data model
HBase organizes data differently from traditional relational databases, supporting a four-dimensional data model in which each “cell” is represented by four coordinates:
Row key: Each row has a unique row key that is represented internally by a byte array, but does not have any formal data type.
Column family: The data contained in a row is partitioned into column families; each row has the same set of column families, but each column family does not need to maintain the same set of column qualifiers. You can think of column families as being similar to tables in a relational database.
Column qualifier: These are similar to columns in a relational database.
Version: Each column can have a configurable number of versions. If you request the data contained in a column without specifying a version then you receive the latest version, but you can request older versions by specifying a version number.
Figure 1 shows how these four-dimensional coordinates are related.
The model in Figure 1 shows that a row is composed of a row key and an arbitrary number of column families. While every row in a table has the same column families, the columns within those families can differ from row to row. Each column family has a set of columns, and each column has a set of versions that map to the actual data in the row.
If we were modeling a person, the row key might be the person’s social security number (to uniquely identify them), and we might have column families like address, employment, education, and so forth. Inside the address column family we might have street, city, state, and zip code columns, and each version might correspond to where the person lived at any given time. The latest version might list the city “Los Angeles,” while the previous version might list “New York.” You can see this example model in Figure 2.
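To make the four coordinates concrete, here is a toy sketch in plain Java (not the HBase API) that models the person/address example with nested maps; all class and method names here are illustrative:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.NavigableMap;
import java.util.TreeMap;

// Toy model of HBase's four-dimensional coordinates:
// row key -> column family -> column qualifier -> version -> value.
public class FourDModelSketch {

    // rowKey -> family -> qualifier -> (version -> value)
    static final Map<String, Map<String, Map<String, NavigableMap<Long, String>>>> table =
            new TreeMap<>();

    static void put(String rowKey, String family, String qualifier, long version, String value) {
        table.computeIfAbsent(rowKey, k -> new HashMap<>())
             .computeIfAbsent(family, k -> new HashMap<>())
             .computeIfAbsent(qualifier, k -> new TreeMap<>())
             .put(version, value);
    }

    // Like HBase, a read without an explicit version returns the latest one.
    static String getLatest(String rowKey, String family, String qualifier) {
        return table.get(rowKey).get(family).get(qualifier).lastEntry().getValue();
    }

    static String getVersion(String rowKey, String family, String qualifier, long version) {
        return table.get(rowKey).get(family).get(qualifier).get(version);
    }

    public static void main(String[] args) {
        put("123-45-6789", "address", "city", 1L, "New York");
        put("123-45-6789", "address", "city", 2L, "Los Angeles");
        System.out.println(getLatest("123-45-6789", "address", "city"));      // Los Angeles
        System.out.println(getVersion("123-45-6789", "address", "city", 1L)); // New York
    }
}
```

Note how requesting the cell without a version returns the newest value, while older versions remain addressable, exactly as described above.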
In sum, HBase is a column-oriented database that represents data in a four dimensional model. It is built on top of the Hadoop Distributed File System (HDFS), which partitions data across potentially thousands of commodity machines. Developers using HBase can access data directly by accessing a row key, by scanning across a range of row keys, or by using batch processing via MapReduce.
Bridging the NoSQL gap: Apache Phoenix
Apache Phoenix is a top-level Apache project that provides an SQL interface to HBase, mapping HBase models to a relational database world. Of course, HBase provides its own API and shell for performing functions like scan, get, put, list, and so forth, but more developers are familiar with SQL than NoSQL. The goal of Phoenix is to provide a commonly understood interface for HBase.
In terms of features, Phoenix does the following:
Provides a JDBC driver for interacting with HBase.
Supports much of the ANSI SQL standard.
Supports DDL operations such as CREATE TABLE, DROP TABLE, and ALTER TABLE.
Supports DML operations such as UPSERT and DELETE.
Compiles SQL queries into native HBase scans and then maps the response to JDBC ResultSets.
Supports versioned schemas.
In addition to supporting a vast set of SQL operations, Phoenix delivers high performance. It analyzes SQL queries, breaks them down into multiple HBase scans, and runs the scans in parallel, using the native API instead of MapReduce processes.
Phoenix uses two strategies–co-processors and custom filters–to bring computations closer to the data:
Co-processors perform operations on the server, which minimizes client/server data transfer.
Custom filters reduce the amount of data returned in a query response from the server, which further reduces the amount of transferred data. Custom filters are used in a few ways:
When executing a query, a custom filter can be used to identify only the essential column families required to satisfy the search.
A skip scan filter uses HBase’s SEEK_NEXT_USING_HINT to quickly navigate from one record to the next, which speeds up point queries.
A custom filter can “salt the data,” meaning that it adds a hash byte at the beginning of the row key so that records can be located quickly.
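As a rough illustration of the salting idea, the following sketch prepends a hash-derived byte to a row key. The bucket count and hash function are assumptions for demonstration, not Phoenix’s exact scheme:

```java
import java.nio.charset.StandardCharsets;

// Sketch of row-key salting: a single hash byte leads the stored key, so
// monotonically increasing keys are spread across buckets (and thus servers).
public class SaltSketch {

    static final int SALT_BUCKETS = 8; // hypothetical number of buckets

    static byte[] salt(byte[] rowKey) {
        int hash = 0;
        for (byte b : rowKey) {
            hash = 31 * hash + b; // simple rolling hash over the key bytes
        }
        byte saltByte = (byte) ((hash & 0x7fffffff) % SALT_BUCKETS);
        byte[] salted = new byte[rowKey.length + 1];
        salted[0] = saltByte; // the salt byte leads the stored key
        System.arraycopy(rowKey, 0, salted, 1, rowKey.length);
        return salted;
    }

    public static void main(String[] args) {
        for (String key : new String[] {"user-0001", "user-0002", "user-0003"}) {
            byte[] salted = salt(key.getBytes(StandardCharsets.UTF_8));
            System.out.println(key + " -> bucket " + salted[0]);
        }
    }
}
```

Because the salt byte is derived deterministically from the key, a reader can recompute it and jump straight to the right bucket when locating a record.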
In sum, Phoenix leverages direct access to HBase APIs, co-processors, and custom filters to give you millisecond-level performance for small datasets and second-level performance for humongous ones. Above all, Phoenix exposes these capabilities to developers via a familiar JDBC and SQL interface.
Get started with Phoenix
In order to use Phoenix, you need to download and install both HBase and Phoenix. You can find the Phoenix download page (and HBase compatibility notes) here.
Download and setup
At the time of this writing, the latest version of Phoenix is 4.6.0, and the download page states that 4.x is compatible with HBase versions 0.98.1 and later. For my example, I downloaded the latest version of Phoenix configured to work with HBase 1.1; you can find it in the folder phoenix-4.6.0-HBase-1.1/.
Here’s the setup:
Download and decompress this archive and then use one of the recommended mirror pages here to download HBase. For instance, I selected a mirror, navigated into the 1.1.2 folder, and downloaded hbase-1.1.2-bin.tar.gz.
Decompress this file and create an HBASE_HOME environment variable that points to it; for example, I added the following to my ~/.bash_profile file (on Mac): export HBASE_HOME=/Users/shaines/Downloads/hbase-1.1.2.
Integrate Phoenix with HBase
The process to integrate Phoenix into HBase is simple:
Copy the following file from the Phoenix root directory to the HBase lib directory: phoenix-4.6.0-HBase-1.1-server.jar.
Start HBase by executing the following script from HBase’s bin directory: ./start-hbase.sh.
With HBase running, test that Phoenix is working by launching the SQLLine console: execute the following command from Phoenix’s bin directory: ./sqlline.py localhost.
The SQLLine console
sqlline.py is a Python script that starts a console connected to HBase’s Zookeeper address; localhost in this case. You can walk through the full example here; I’ll summarize it in this section.
First, let’s view all of the tables in HBase by executing !table:
+-----------+--------------+-------------+---------------+----------+
| TABLE_CAT | TABLE_SCHEM  | TABLE_NAME  | TABLE_TYPE    | REMARKS  |
+-----------+--------------+-------------+---------------+----------+
|           | SYSTEM       | CATALOG     | SYSTEM TABLE  |          |
|           | SYSTEM       | FUNCTION    | SYSTEM TABLE  |          |
|           | SYSTEM       | SEQUENCE    | SYSTEM TABLE  |          |
|           | SYSTEM       | STATS       | SYSTEM TABLE  |          |
+-----------+--------------+-------------+---------------+----------+
Because this is a new instance of HBase, the only tables that exist are system tables. You can create a table by executing a create table command:
0: jdbc:phoenix:localhost> create table test (mykey integer not null primary key, mycolumn varchar);
This command creates a table named test, with an integer primary key named mykey and a varchar column named mycolumn. Now insert a couple of rows using the upsert command:
0: jdbc:phoenix:localhost> upsert into test values (1,'Hello');
1 row affected (0.142 seconds)
0: jdbc:phoenix:localhost> upsert into test values (2,'World!');
1 row affected (0.008 seconds)
UPSERT is an SQL command that inserts a record if it does not exist and updates it if it does. In this case, we inserted (1,'Hello') and (2,'World!'). You can find the complete Phoenix command reference here. Finally, query your table to see the values that you upserted by executing select * from test:
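If UPSERT’s insert-or-update behavior is new to you, this small Java sketch models it with a map keyed on the primary key. It is illustrative only; a real UPSERT goes through the Phoenix JDBC driver:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch of UPSERT semantics: the same operation inserts a row when the
// primary key is absent and updates the row when the key is present.
public class UpsertSketch {

    // mykey -> mycolumn, standing in for the test table
    static final Map<Integer, String> testTable = new LinkedHashMap<>();

    static void upsert(int mykey, String mycolumn) {
        testTable.put(mykey, mycolumn); // insert-or-update in one operation
    }

    public static void main(String[] args) {
        upsert(1, "Hello");
        upsert(2, "World!");
        upsert(2, "Phoenix!"); // key 2 exists, so this updates rather than inserts
        System.out.println(testTable); // {1=Hello, 2=Phoenix!}
    }
}
```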
The POM file does some housekeeping work next: it sets the source compilation to Java 6, specifies that dependencies should be copied to the target/lib folder during the build, and makes the resulting JAR file executable for the main class, com.geekcap.javaworld.phoenixexample.PhoenixExample.
Listing 2 shows the source code for the PhoenixExample class.
Just like in the shell console, localhost refers to the server running Zookeeper. If you were connecting to a production HBase instance, you would use the Zookeeper server name or IP address for that instance. With a javax.sql.Connection, the rest of the example is simple JDBC code. The steps are as follows:
Create a Statement for the connection.
Execute a series of statements using the executeUpdate() method.
Create a PreparedStatement to select all the data that we inserted.
Execute the PreparedStatement, retrieve a ResultSet, and iterate over the results.
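Those four steps might look something like the following sketch. It assumes the Phoenix client JAR is on the classpath and that HBase/Zookeeper is running on localhost; the class name and SQL here are hypothetical stand-ins, not the article’s exact Listing 2:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

// Sketch of the JDBC flow against Phoenix described in the steps above.
public class PhoenixJdbcSketch {

    static final String URL = "jdbc:phoenix:localhost";
    static final String DDL =
            "CREATE TABLE IF NOT EXISTS javatest (mykey INTEGER NOT NULL PRIMARY KEY, mycolumn VARCHAR)";

    public static void main(String[] args) throws SQLException {
        try (Connection conn = DriverManager.getConnection(URL)) {
            // Steps 1-2: create a Statement and run DDL/DML via executeUpdate()
            try (Statement stmt = conn.createStatement()) {
                stmt.executeUpdate(DDL);
                stmt.executeUpdate("UPSERT INTO javatest VALUES (1, 'Hello')");
                stmt.executeUpdate("UPSERT INTO javatest VALUES (2, 'Java Application')");
                conn.commit(); // Phoenix does not auto-commit by default
            }
            // Steps 3-4: PreparedStatement, then iterate over the ResultSet
            try (PreparedStatement ps = conn.prepareStatement("SELECT * FROM javatest");
                 ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    System.out.println(rs.getInt("mykey") + " " + rs.getString("mycolumn"));
                }
            }
        }
    }
}
```

Beyond the Phoenix JDBC URL and the explicit commit, there is nothing Phoenix-specific here: it is ordinary JDBC.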
You can build the project as follows: mvn clean install.
Then execute it with the following command from the target directory:
java -jar phoenix-example-1.0-SNAPSHOT.jar
You should see output like the following (note that I excluded the Log4j warning messages):
You can also verify this from the Phoenix console. First execute a !tables command to view the tables and observe that JAVATEST is there:
+-----------+--------------+-------------+---------------+----------+
| TABLE_CAT | TABLE_SCHEM  | TABLE_NAME  | TABLE_TYPE    | REMARKS  |
+-----------+--------------+-------------+---------------+----------+
|           | SYSTEM       | CATALOG     | SYSTEM TABLE  |          |
|           | SYSTEM       | FUNCTION    | SYSTEM TABLE  |          |
|           | SYSTEM       | SEQUENCE    | SYSTEM TABLE  |          |
|           | SYSTEM       | STATS       | SYSTEM TABLE  |          |
|           |              | JAVATEST    | TABLE         |          |
+-----------+--------------+-------------+---------------+----------+
Finally, query the JAVATEST table to see your data:
Note that if you want to run this example multiple times, you will want to drop the table, either from the console or by adding the following to the end of Listing 2:
statement.executeUpdate("drop table javatest");
As you can see, using Phoenix is a simple matter of creating a JDBC connection and using the JDBC APIs. With this knowledge, you should be able to start using Phoenix with more advanced tools like Spring’s JdbcTemplate or any of your other favorite JDBC abstraction libraries!
Apache Phoenix provides an SQL layer on top of Apache HBase that allows you to interact with HBase in a familiar manner. You can leverage the scalability that HBase derives from running on top of HDFS, along with the multi-dimensional data model that HBase provides, and you can do it using familiar SQL syntax. Phoenix also supports high performance by leveraging native HBase APIs rather than MapReduce processes; implementing co-processors to reduce client/server data transfer; and providing custom filters that improve the execution, navigation, and speed of data querying.
Using Phoenix is as simple as adding a JAR file to HBase, adding Phoenix’s JDBC driver to your CLASSPATH, and creating a standard JDBC connection to Phoenix using its JDBC URL. Once you have a JDBC connection, you can use HBase just as you would any other database.
This Open source Java projects tutorial has provided an overview of both HBase and Phoenix, including the specific motivation for developing each of these technologies. You’ve set up and integrated Phoenix and HBase in your local working environment, and learned how to interact with Phoenix using the Phoenix console and through a Java application. With this foundation you should be well prepared to start building applications on top of HBase, using standard SQL.
Docker 1.10, the latest version of the software containerization system, addresses one of its most long-standing criticisms.
Until now, containers have had to run as root under the Docker daemon, with various hair-raising (in)security implications. The solution in Docker 1.10 is a feature called user namespacing. Originally introduced as an experimental feature in version 1.9, it’s now generally available in version 1.10 along with a bundle of other improvements.
A safe space for your name
With user namespaces, privileges for the Docker daemon and container are handled separately, so each container can receive its own user-level privileges. Containers do not need root access on the host, although the Docker daemon still does.
However, Nathan McCauley, director of security at Docker, clarified in an email that user namespaces are currently available only for Linux. “Windows has its own isolation features that we’ll integrate with Docker,” he wrote. “On every platform we’ll aim to support every isolation feature.”
Docker has further expanded on user namespaces by providing a plug-in mechanism for authorization, so admins can configure how Docker deals with user access policies within their organization. Syscalls passed from containers can also be permitted, denied, or traced based on policy settings.
That the Docker runtime runs as root, and the security implications stemming from that, have long been chief criticisms of Docker. CoreOS was among the most vocal critics, and used the 1.0 release of its rkt container runtime to show how it is possible to run containers without root access. (Rkt can run existing Docker containers as-is.)
Docker has long been aware that having users in containers potentially run as root is problematic, but it took several point releases to mitigate the issue by way of namespace support and make it stable. CoreOS’s rkt currently has experimental support for such a feature.
Re-weaving the net
Docker 1.10 adds major improvements in two other areas. Docker Compose, the native Docker tool for creating multicontainer applications, has a new definition format that now includes ways to describe networks between containers, as supported by Docker’s networking subsystem. This means the work needed to describe a multicontainer application is spread across fewer places.
Networking has also been bolstered. The Docker daemon now includes its own DNS client, which is a way to allow container networks to automatically perform service discovery without the /etc/hosts hackery that was previously required. DNS queries can also be forwarded to an external server if needed.
This goes hand-in-hand with a new internal networking feature that lets containers have network traffic restricted to only their own private subnet by specifying a command-line flag.
Networking has been another long-standing Docker issue and was eventually solved by acquiring and integrating a third-party solution. The latest changes to Docker networking are being touted as a way to take the network topology created for a Dockerized application in development and deploy it in production without changes — addressing another persistent criticism stemming from the limits of Docker’s legacy networking model.
As the Windows team released its Windows 10 Redstone Insider preview build 14279, reports are rolling in that the build brings with it improvements to the Messaging + Skype app for PCs. Windows Insiders are now experiencing a messaging history sync that spans both their mobile devices and PCs. Insiders who have chosen to back up their SMS messages on their mobile devices (Windows Phone 8 included) are now greeted with their SMS history in the Messaging + Skype app on Windows 10 PCs.
Since the January 2015 reveal of Windows 10, some Windows users have been waiting for the converged messaging experience Microsoft so leisurely glossed over during its presentation. The company showcased a slide and spent a hair over two minutes covering the “soon-to-be” integration of Skype plus its crafted messaging app into Windows 10, resulting in a Google Hangouts or iMessage-like experience for Windows users.
Now, with two massive operating system upgrades, countless Insider builds in the can, and a year under its belt, it appears the Windows team is drawing ever closer to the messaging experience it once promised.
To test out the new feature, head over to your Update Settings and check for Windows 10 Redstone Insider preview build 14279. Once updated, visit the Messaging + Skype app on both PC and mobile to ensure the backup feature has been enabled. From there, the app should populate automatically.
Unfortunately, the app’s functionality appears to be a one-way street. Messages, images, and GIFs can be seen in the PC version of the app, but cannot be sent out. Perhaps another Insider release will polish the experience sooner rather than later.
GitHub, under fire by developers for allegedly ignoring requests to improve the code-sharing site, has pledged to fix the issues raised.
Brandon Keepers, GitHub’s head of open source, wrote in a letter today that GitHub is sorry, and he acknowledged it has been slow to respond to frustrations.
“We’re working hard to fix this. Over the next few weeks, we’ll begin releasing a number of improvements to issues, many of which will address the specific concerns raised in the letter,” Keepers said. “But we’re not going to stop there. We’ll continue to focus on issues moving forward by adding new features, responding to feedback, and iterating on the core experience. We’ve also got a few surprises in store.”
More than 450 contributors to open source projects last month had posted a “Dear GitHub” letter on GitHub itself, expressing frustration with its poor response to problems and requests, including a need for custom fields for issues, the lack of a proper voting system for issues, and the need for issues and pull requests to automatically include contribution guideline boilerplate text. “We have no visibility into what has happened with our requests, or whether GitHub is working on them,” the letter said.
Keepers acknowledged that issues have not gotten much attention the past few years, which he called a mistake. He stressed that GitHub has not stopped caring about users and their communities. “However, we know we haven’t communicated that. So in addition to improving issues, we’re also going to kick off a few initiatives that will help give you more insight into what’s on our radar. We want to make sharing feedback with GitHub less of a black box experience and we want to hear your ideas and concerns regularly,” he said.
Comments at GitHub on Keepers’ letter were mostly positive. “Good to see it is at least being acknowledged. Curious to see what the improvements actually will be,” one commenter wrote. Many simply posted a thumbs-up emoji.
Forrester analyst Jeffrey Hammond called the users’ concerns legitimate and warned that GitHub cannot ignore them. “I don’t see [possible defections to other sites] as an immediate, existential risk yet — [this is] more like a shot across the bow,” he said. But “if enough of the community bolts all at once, the transition could be immediate.”
If you’ve ever accidentally deleted a document you saved to iCloud, Apple has a new way for you to restore your data. This new method can be used to restore lost iCloud files, Contacts, or data from Calendar and Reminders.
You’ll find the new restore features when you log into your iCloud.com account and go into Settings. At the bottom of the Settings page, there’s a new Advanced section, with links to Restore Files, Restore Contacts, or Restore Calendars or Reminders.
When you delete a file from iCloud Drive, you have 30 days from the day of deletion to recover it via the Restore Files feature. After 30 days, the file is permanently deleted and cannot be recovered. When you restore a file, it reappears in your iCloud Drive.
When using Restore Contacts, you can select which backup archive you want to restore. A restoration will replace the contacts on all of your devices–all your Macs, iPhones, iPads, and iPod touches. Before the restore, a backup of your current contacts is made, so you can revert back to it if needed.
Using Restore Calendars or Reminders is a little more involved. Sharing information is not stored in any of your archives, so you need to restore sharing privileges manually. Scheduled events are canceled and then re-created, so invitations are resent; you’ll need to let folks know why they’re receiving all those event notifications. And as with Restore Contacts, a restore replaces the calendars and reminders on all your devices, and an archive of your pre-restore data is made in case you need it.
This fall, Apple will release OS X El Capitan, which is version 10.11 of the Mac operating system. In this FAQ, we’ll answer some of the more general questions about El Capitan to help you decide about installing it on your Mac.
Why is it called El Capitan?
Apple now names OS X after California locations, and the El Capitan name has more meaning than what it seems on the surface. The El Capitan “location” is a 3000-foot monolith of granite found within Yosemite National Park.
As Apple puts it, OS X El Capitan is about “refining the experience and improving performance” of OS X. If you consider that OS X 10.11 is mostly designed to tweak, fix, and add minor features to OS X Yosemite (version 10.10), then the name of OS X 10.11 makes sense.
When will it be available?
Apple says that El Capitan will be available on September 30.
What’s the price and how do I get it?
El Capitan is free. It’s available in the Mac App Store.
Will it run on my computer?
El Capitan will work on these Macs running OS X Snow Leopard or later:
iMac (Mid 2007 or newer)
MacBook (Late 2008 Aluminum, or Early 2009 or newer)
MacBook Pro (Mid/Late 2007 or newer)
MacBook Air (Late 2008 or newer)
Mac mini (Early 2009 or newer)
Mac Pro (Early 2008 or newer)
The general minimum requirements call for 2GB of memory, 8GB of available storage, and an Internet connection for some features.
What are the new features?
For what’s considered a fine-tune release, El Capitan has a number of new features that make it worth the upgrade. The changes to OS X itself aren’t extensive: Split View, a revamped Mission Control, Spotlight improvements, better support for Chinese and Japanese text, general performance tweaks, and Metal, Apple’s new graphics core technology. Oh, and there’s also the thing where your cursor gets bigger when you shake your mouse so you can spot it.
The major changes are in the apps that come with El Capitan. Safari, Mail, Notes, Maps, and Photos all have new versions.
Read next: Top 10 secret features in Mac OS X El Capitan
New features in the apps? OK. How about them? Start with Safari.
The two main new features in Safari 9 are Pinned Sites, which allow you to “pin” your most frequently visited websites to the tab bar; and tab muting, where you can find the tab playing audio and mute that specific tab. Get more details about the new features in Safari 9.
Sounds nice. What’s new with Mail?
Mail 9 has been revised so that it works better in full-screen mode. There’s also better integration with the Calendar and Contacts app. And you know how iOS Mail has those gestures to handle your emails? Mail 9.0 for Mac has them, too. Get more details about the new features in Mail 9.
Okey dokey. What about Notes?
Notes 4 is a different app from the previous version—it does a lot more. You can now create checklists, and notes can have embedded audio and video. The new Attachments Browser lets you easily spot the photos, video, sketches, map locations and more within your notes. All the data can be accessed across your Mac and iOS devices. Get more details about the new features in Notes 4.
Good, good. What’s up with Maps?
Maps 2 finally gets public transit information, but this feature won’t be available in many cities when El Capitan is released. This is probably more of a feature you’ll use with iOS, but it’ll be a limited one at the start. Get more details about the new features in Maps.
Next app: Photos. Tell me about it.
Photos hasn’t been out for very long, so the version in El Capitan is version 1.1. It has support for third-party image editing extensions, so you can do more with your photos while in Photos. You can also edit image data, and there are better album sorting options.
What about performance enhancements in the system? And what is this “Metal” thing?
You can never have too much speed, huh? Apple says that apps launch 40 percent faster than before, and switching apps is quicker. The company also says that the first mail message in Mail 9 will appear faster, and opening a PDF will have a 4x improvement.
As for Metal, it’s Apple’s name for its graphics core technology. Metal actually made its debut with iOS 8 last year, and now it’s on the Mac. Apple says Metal is 50 percent better at system-level graphics rendering, and that it dramatically improves draw call performance.
In plain English: Metal will improve graphics performance, so your apps and games will look awesome.
Ah, cool. Are there any new security features?
There are. The new System Integrity Protection works against malware by locking down more parts of the core system. Unfortunately, this could break some legitimate software utilities that you use. Get more details about El Capitan’s new security features.
Finally, what about Siri? Is it on the Mac? Can I sit in front of my computer and tell it what to do, like “Find the nearest pizza joint” and it’ll show me the results, and then I can call that place and order a sausage and anchovy pizza? Or, can I, like, sit in front of my Mac, and ask “What does the fox say?” and Siri will reply by saying “Fraka-kaka-kaka-kaka-kow!” and I’ll slap my knee and heartily laugh?
Siri’s not on the Mac, and it won’t happen with El Capitan. You’ll have to order your pizza the old fashioned way. Chacha-chacha-chacha-chow!
A governance model for Npm is under exploration, said Npm founder Isaac Schlueter, CEO of Npm Inc., which currently oversees the project. He hopes the move will expand participation in Npm’s development, as participating today can be awkward because the program is owned and maintained by a single company.
Plans call for completing the move by the end of this year, with Npm Inc. still participating. Other companies are already interested in working with the foundation, Schlueter said, though he would not reveal their names.
Schlueter said that efforts would be made to keep the project on strong footing, adding “what we really don’t want to do is break something that’s working.” There are transparent development processes already in place, he said, and in addition to encouraging more outside participation in Npm’s development, the foundation should ensure the project’s continuity.
Node.js itself was moved to an independent foundation, simply called the Node.js Foundation, last year, after gripes arose with Joyent’s handling of the project and a fork of the technology occurred called io.js. But io.js has since been folded back into Node.js. “I think [developing a governance model] will be a lot easier than it was with Node because the team is more on the same page and there are not as many hurdles to jump through,” Schlueter said.
Npm Inc. runs the open source Npm registry as a free service and will continue to do so after the foundation is formed. The company also offers tools and services to support the secure use of packages in a private or enterprise context.
Microsoft’s bet on its new Windows 10 Mobile keystone feature, Continuum, has yet to materialize fully for the company. As developers timidly wade into the waters of Windows development, with a shrinking smartphone market share and no tablets on the horizon utilizing the Windows 10 Mobile-specific version of the operating system, it’s incumbent upon Microsoft to push the idea of Continuum forward.
Until recently, demos of Continuum have been relegated to conferences and special device showcases. Without a public presence or casual awareness, Microsoft’s vision of Continuum is effectively dying on the vine thus far. However, it seems Microsoft is making a strategic pivot in who it sees Continuum being used by.
In a new video, Microsoft pitched the idea of Continuum as a tool to access even more PC-like content in a more mobile package. For a minute and twenty-seven seconds, the video guide goes over the use cases and apps available to a Continuum user through the Remote Desktop app.
No longer relegated to Universal Windows Apps specifically tailored for Continuum support, someone owning a Lumia 950 or 950 XL (presumably any Windows 10 Mobile phone with Continuum support) can now access a truly full desktop experience.
“You get the power and functionality of the application without physically having to be at the office or in front of a PC. You can access your PC’s desktop files and run traditional Windows apps, such as SAP, Photoshop, or AutoCAD.”
For those still hesitant on the concept of Continuum, Microsoft’s new video may help provide a clearer picture of the company’s intentions with the feature. Similar to how Microsoft sold customers on the convergence factor of a Surface device, this new video strikes an eerily similar note using the same language and vocabulary, equipped with the same callouts to ‘full’ Photoshop and AutoCAD.
Microsoft’s Surface Pro 4 was announced during the Windows 10 Devices event in New York City on October 6th, 2015. It features a full array of 6th generation Intel Core m3, i5, and i7 processor models, with 4GB, 8GB, and 16GB RAM configurations, as well as up to 1TB of storage.
The new Surface Pro 4 is thinner, down from 9.1mm to 8.4mm. It is also marginally lighter, starting at 1.69 lbs (766g) for the m3 model and 1.73 lbs (786g) for the i5 and i7 models.
The Surface Pro 4 features a larger 12.3-inch display with a higher 2736 x 1824 resolution (267 PPI) than its predecessor. Like all the current generation Surface devices, it carries a 3:2 aspect ratio. Its other dimensions remain unchanged. As such, there is noticeably less bezel on the sides of the screen than before. It’s also worth noting that the new device does not have a Start button.