Google lets three enterprise cloud databases loose

Promises better performance than AWS.

Google has made three new enterprise database offerings generally available, hoping to lure customers currently on Amazon Web Services and Microsoft’s Azure platforms over to its Compute Engine service.

The three offerings are the fully managed Cloud SQL Second Generation with MySQL instances, the Cloud Bigtable NoSQL wide-column service with Apache HBase compatibility, and Cloud Datastore, a scalable NoSQL document database.

Pricing for Cloud SQL 2nd Generation starts at US$0.015 per hour for the smallest instance, db-f1-micro, with 0.6GB of memory, a shared virtual CPU, and a maximum of 3TB of storage.

This goes up to US$2.012 per hour for the db-n1-highmem-16 instance, with 16 vCPUs, 104GB of RAM and up to 10TB of storage. In addition, Google charges US$0.17 per GB per month for storage capacity, and US$0.08 per GB per month for backups.

Bigtable nodes cost US$0.65 per node per hour, with a minimum of three required per cluster. Each node can deliver up to 10,000 queries per second and 10 Mbps of data transfer.

Storage for Bigtable on solid state disks is charged at US$0.17 per GB per month, with the hard drive equivalent service costing US$0.026 per GB per month. Australian customers pay US$0.19 per GB for up to 1TB of internet egress traffic, which drops to US$0.18/GB for 1 to 10TB, and US$0.15/GB for more than 10TB.
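To make the per-node and per-GB rates concrete, here is a small sketch that estimates the monthly cost of a minimal Bigtable cluster. The rates are the article's figures hard-coded as constants; the 730-hour month is my assumption (365 × 24 / 12), and actual billing granularity may differ.

```python
# Estimate monthly Bigtable cost from the rates quoted above.
# The 730-hour month is an assumption; real billing may differ.

NODE_RATE_USD_PER_HOUR = 0.65      # per node, minimum 3 nodes per cluster
SSD_RATE_USD_PER_GB_MONTH = 0.17
HDD_RATE_USD_PER_GB_MONTH = 0.026
HOURS_PER_MONTH = 730

def monthly_bigtable_cost(nodes: int, storage_gb: float, ssd: bool = True) -> float:
    """Return the estimated monthly cost in US dollars."""
    if nodes < 3:
        raise ValueError("Bigtable clusters require at least three nodes")
    node_cost = nodes * NODE_RATE_USD_PER_HOUR * HOURS_PER_MONTH
    storage_rate = SSD_RATE_USD_PER_GB_MONTH if ssd else HDD_RATE_USD_PER_GB_MONTH
    return node_cost + storage_gb * storage_rate

# A minimal three-node cluster with 1TB of SSD storage:
print(round(monthly_bigtable_cost(3, 1024), 2))
```

Even before any storage or egress, the three-node minimum alone works out to roughly US$1,400 a month.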

Cloud Datastore is free for up to 1GB of storage and 50,000 entity reads, 20,000 writes and 20,000 deletes, with additional charges once those limits are reached.

Customers wanting to run their own databases on Google Compute Engine can now use Microsoft SQL Server images with built-in licenses. Businesses can also use their own existing application licenses.

Google claimed that its Cloud SQL 2nd Gen database provides substantially better performance than Amazon’s RDS MySQL Multi-Availability Zone and RDS Aurora databases at up to 16 concurrent threads, as measured with the Sysbench benchmark.

Beyond 16 concurrent threads the AWS databases were slightly better than Cloud SQL 2nd Gen. In terms of transactions per second, Sysbench testing showed AWS Aurora to be the leader beyond 16 concurrent threads.

Some of the performance difference is due to design decisions for the databases: Google’s Cloud SQL 2nd Gen emphasises performance and allows for replication lag, which can increase failover times but won’t put data at risk, Google said.

AWS Aurora, meanwhile, is designed with replication technology that exhibits minimal performance variation and consistent lag.

Google also said the Cloud SQL 2nd Gen replicated database had about half the end-to-end latency for single client threads compared to AWS RDS for MySQL Multi-Availability Zone, at 32.02ms – substantially better than the 70.12ms measured for AWS RDS Aurora.

[“Source-itnews”]

Real-time MySQL Performance Monitoring

A key part of keeping your MySQL database running smoothly is the regular monitoring of performance metrics. In fact, there are literally hundreds of metrics that can be measured to give you real-time insight into your database’s health and performance. To this end, several MySQL monitoring tools have been developed to make performance monitoring easier. In today’s article we’ll use Monyog to measure a few of the more important metrics.

Getting Started

As I said in the introduction, there are quite a few performance monitoring tools for MySQL to choose from, but the one I use is called Monyog.  Developed by Webyog, Monyog not only monitors one or more MySQL servers, it also advises you on how to tune the databases, find problems, and then fix them before they can become serious problems or costly outages.

Monyog utilizes “agent-less monitoring”, eliminating the need to install and maintain monitoring agents, which can be a complex administration task by itself. Instead, Monyog uses a normal MySQL connection for monitoring MySQL. To collect OS data from remote servers, Monyog uses SSH for Linux. This means Monyog can collect all monitoring data using remote connections. This is a huge advantage that sets Monyog apart from other MySQL monitoring and advisory tools, because it doesn’t force you to install any components on your MySQL servers, making it totally unobtrusive. It utilizes no CPU cycles or memory on the servers, leaving them free to do what they were meant to do, and it helps when you don’t have admin rights to the server box.

To install Monyog, simply navigate to the product page on the Webyog site, and click the “Download free trial” button.  Monyog is supported on Microsoft Windows (2003 and higher) and on Linux (installers are based on the [originally Red Hat] .RPM standards along with a .tar package for Ubuntu and Debian systems). Keep in mind that “supported platforms” only refers to the platforms on which Monyog itself must be installed and not the platform that MySQL is installed on.  If there is a MySQL server running, Monyog can connect to it!

Here are the Windows installation instructions.

Here are the Linux installation instructions.

Monitoring Performance

Monyog comes with 600+ pre-configured monitors and advisors. With so many measurable metrics to choose from, let’s narrow down the field to cover a couple of areas of performance and resource utilization:

  • Query execution performance
  • Connections

In the next sections, I’ll show you how to obtain the above metrics using Monyog’s simple and intuitive UI.

Query Execution Performance

In my 18-or-so years of supporting mission critical applications, the number one database-related complaint from users was that queries are running too slowly.  A database’s work is running queries, so your first monitoring priority should be making sure that MySQL is executing queries as expected.

Monyog’s simple UI gives us the ability to monitor multiple servers from a clean interface, either from log files or via Real-time monitoring. Since this article is on Real-time monitoring, that’s what we’ll do here. Note that Real-time monitoring is best for short bursts of debugging because it does place some overhead on your database server(s). To show performance metrics:

  1. Click the (Real-time) Clock icon on the left-hand side of the screen.
  2. On the next screen:
    1. Select a server to monitor.
    2. You may then choose to start a new session or load a saved one (I already saved one).

Here is the Monyog real-time query monitoring screen:

Real-time Query Monitoring Screen

Monyog’s Query Analyzer feature is one of my favorites because its simple UI helps me identify potential bottlenecks quickly and easily. The Average Latency (shown above) provides the time taken by each query to execute.

If you’re not happy with a query’s performance, you can run the EXPLAIN command by clicking on the query and selecting the Explain tab on the Query Details screen:
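You can also run EXPLAIN yourself from any SQL client. As a self-contained illustration, the sketch below uses Python’s bundled sqlite3 as a stand-in for a MySQL server (so the plan text differs from MySQL’s EXPLAIN output, and the table and index names are my own): the plan changes from a full table scan to an index lookup once an index exists.

```python
import sqlite3

# sqlite3 stands in for MySQL so the example is self-contained;
# on MySQL you would run EXPLAIN SELECT ... against the real server.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE posts (id INTEGER PRIMARY KEY, author_id INTEGER, title TEXT)")
conn.executemany("INSERT INTO posts (author_id, title) VALUES (?, ?)",
                 [(i % 10, f"post {i}") for i in range(1000)])

def plan(sql: str) -> str:
    # EXPLAIN QUERY PLAN is sqlite's rough analogue of MySQL's EXPLAIN;
    # the fourth column of each row is the human-readable detail.
    return " ".join(row[3] for row in conn.execute("EXPLAIN QUERY PLAN " + sql))

query = "SELECT title FROM posts WHERE author_id = 7"
before = plan(query)   # full table scan
conn.execute("CREATE INDEX idx_author ON posts (author_id)")
after = plan(query)    # search using idx_author
print(before)
print(after)
```

Seeing a slow query's plan flip from a scan to an index search is exactly the kind of fix the Query Analyzer is meant to point you toward.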

Query Details screen

Connections

In MySQL, the global variable max_connections determines the maximum number of concurrent connections to MySQL. Generally, you’ll want to make sure that this variable is set to a high enough value to support your user base. Moreover, it’s important to design your applications in such a way that a MySQL connection is kept open for a very short period of time.  You may also want to try pooling connections or switch to persistent connections, for example, by using mysql_pconnect() instead of mysql_connect(). Both these actions will help reduce the number of active MySQL connections.

Monyog reports the max_connections variable as the “Max allowed” metric on the Current Connections screen.  It also divides the number of open connections by that figure to produce the Connection usage as a percentage:
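The same figure is easy to derive yourself: MySQL exposes the open-connection count as the Threads_connected status variable, and dividing it by max_connections gives the usage percentage. A minimal sketch follows; the 80% warning threshold is my assumption, not a documented Monyog default.

```python
def connection_usage(threads_connected: int, max_connections: int) -> float:
    """Percentage of max_connections currently in use.

    On a live server the inputs come from
    SHOW GLOBAL STATUS LIKE 'Threads_connected' and
    SHOW VARIABLES LIKE 'max_connections'.
    """
    return 100.0 * threads_connected / max_connections

# 151 is MySQL's default max_connections; 120 open connections assumed.
usage = connection_usage(threads_connected=120, max_connections=151)
print(f"{usage:.1f}%")
if usage > 80:   # warning threshold (my assumption)
    print("Connection usage is getting dangerously high")
```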

Current Connections Screen

In addition to monitors, Monyog also supplies hundreds of advisors that automatically examine a MySQL server’s configuration, security, and performance levels. These identify problems and provide the MySQL DBA with specific corrective actions. Here on the Connection History screen, we can see that the RDS-Dev server’s Percentage of max allowed reached connections is getting dangerously high:

Connection History Screen

Clicking on the monitor name brings up the Monitor Details screen.  It contains many useful details about the monitor, including Advice text:

Monitor Details Screen

Pricing

Monyog is available in three flavors: Professional, Enterprise, and Ultimate, all of which come with an unconditional 90-day money back guarantee and free upgrades for 1 year.  The cost of a perpetual license for one MySQL server with premium support runs $199 for Professional, $299 for Enterprise, and $399 for Ultimate.  From there, additional licenses may be purchased at discounted rates:

  • 2  MySQL Servers – up to 17% off
  • 5  MySQL Servers – up to 25% off
  • 10 MySQL Servers – up to 38% off
  • 25 MySQL Servers – up to 50% off
  • 50 MySQL Servers – up to 55% off

All of the above packages are also available without premium support, lowering the cost to $139 for Professional, $199 for Enterprise, and $299 for Ultimate.

Visit the Monyog website for more information on pricing.

Conclusion

In this article we utilized Monyog to measure a couple of the more important database metrics.  In doing so, we saw how a professional grade monitoring tool can help keep our MySQL database(s) running smoothly.  In a future article, we’ll examine some other important metrics.
[“Source-databasejournal”]

MySQL Tops Database Rankings

MySQL remains the world’s most popular open source database while MySQL skills are by far the most in-demand among recruiters, according to the latest rankings of popular databases.

The Stackshare Database Index ranked the top ten databases by the number of technology stacks containing each data platform. It also ranked the most sought-after database skills.

As of June 2017, MySQL ranked first in both categories, listed in 5,270 technology stacks while there were 1,450 job openings for developers with MySQL skills.

The Structured Query Language itself was most often cited as the reason for implementing MySQL databases along with ease of use and a preference for open source approaches and cross-platform support.

The Redis in-memory database ranked second, showing up on 4,080 technology stacks. Among the top ten databases, Redis ranked highest in terms of performance, including nearly 500 developers who rated it “super fast.” Proponents also cited ease of use and in-memory caching.

Meanwhile, Redis developer skills ranked fourth among recruiters.

The PostgreSQL object-relational database came in third in terms of number of stacks in which it runs. Backers most often cited relational database attributes along with high availability, making it an “enterprise class database.”

The PostgreSQL database also cracked the top five gauged in terms of sought-after developer skills.

MongoDB (3,770 stacks) and Amazon Simple Storage Service (3,530 stacks) rounded out the top five databases on the StackShare index.

Meanwhile, Amazon’s (NASDAQ: AMZN) Relational Database Service ranked second on Stackshare’s list of in-demand database skills with 833 job listings. It was followed closely by Hadoop (850), which was credited with a “great ecosystem.”

The database index nevertheless confirms flagging developer enthusiasm for Hadoop, which ranked a distant sixteenth among data store tools and services incorporated into software stacks.

MySQL momentum also was reflected in the growing popularity of related databases such as MariaDB, designed as a “drop-in replacement” for MySQL. MariaDB was lauded for its stability along with “more features, new storage engines, fewer bugs and better performance.”

A growing list of Linux-based platforms are now using MariaDB as their default MySQL replacement, including Red Hat Enterprise Linux (NYSE: RHT), SUSE Enterprise, CentOS and the latest version of a “universal” open-source operating system called Debian.

MariaDB turned up in more than 700 software stacks tracked by Stackshare, ranking a notch below No. 10 Microsoft (NASDAQ: MSFT) SQL Server (832 stacks).

Stackshare is an online community that provides access to software tools and cloud infrastructure services as well as side-by-side comparisons of software tools. It also allows companies ranging from Dropbox to Spotify to share their software stack with prospective developers. The online community claims more than 100,000 members, including CTOs and developers.

[“Source-datanami”]

New Tool Identifies MySQL as Top Database in Software Stacks

MySQL is the most popular database, according to a new tool that ranks various development tools according to their usage in the technology stacks used by various companies.

Three-year-old start-up StackShare Inc. launched its ranking tools this month, including the “StackShare Data Stores Index,” which shows MySQL leading the pack by virtue of its inclusion in some 5,480 stacks.

The company says it’s building a LinkedIn for the Software-as-a-Service (SaaS) arena, with an initial focus on tools used for software development.

Ranking tools are also set up for Application Hosting, Languages & Frameworks and more. Furthermore, developers wishing to explore their own topics of interest can go to the categories page to find more areas under applications and data, utilities, DevOps and Business Tools, all with various sub-categories.

Each category page provides a list of the top offerings ranked by the number of stacks, plus a separate ranking based on the number of active job listings that include a tool.

“Essentially these pages serve as a central place that lets users instantly view the top ranking tools in a particular category,” a company spokesperson told ADTMag. “The tools are ranked based on user reviews/feedback/number of stacks, integrations and jobs. This is useful to CTOs and engineers when they are gauging what tools/services will work best for their teams/projects.”

Top Data Stores (source: StackShare)

Going to the company’s Web site brings up a list of Trending items, showing “What’s hot across StackShare today” (which, today, is Vue.js, followed by Visual Studio and ES6).

“StackShare is the fastest growing community for SaaS tools — we show you all the software a company is using and why,” company literature says. “We’re a developer-only community of engineers, CTOs, and VPEs from some of the world’s top startups. Engineers use StackShare to compare and discover new technologies, while companies use StackShare to connect with engineers. We’re building LinkedIn for the $150B SaaS industry, starting with dev tools.”

Having begun as a side project called Leanstack, StackShare was founded in 2014. A 2013 introductory blog post from when the company was still called Leanstack explains the origin of the project:

Leanstack helps you keep up with the latest and greatest developer services. We show you: which services the most innovative companies in the world are using; let you learn about those services and how they can help you; and send you updates on new services you may be interested in. All in an effort to help make your stack “leaner” and thus, helping to make your company more successful.

Now, the company claims it has data on some 7,000 companies and has attracted more than 150,000 developers to use its services.

Here’s a snapshot of the top entries in some selected ever-changing rankings as they appeared today:

Category → Top tool:

  • Application Hosting Tools and Services → nginx
  • Languages and Frameworks → Bootstrap
  • Assets and Media → Amazon CloudFront
  • Libraries → jQuery
  • DevOps → GitHub
  • Mobile Utilities → jQuery Mobile
  • Analytics Utilities → Google Analytics
  • Collaboration Business Tools → Google Apps

About the Author

David Ramel is an editor and writer for 1105 Media.

[“Source-adtmag”]


Team MySQL v Team PostgreSQL: These companies are betting on them


There’s no one-size-fits-all solution when it comes to choosing databases. Every database is suitable for certain projects and requirements but the fight seems to be between MySQL and PostgreSQL — these are the databases used by giants such as GitHub, Reddit, Airbnb, Spotify and more. Which team are you on?

Top databases in 2017

Will this article simplify your database decision? Sadly, no — it’s entirely up to you. However, if you want to know which databases are gaining momentum this year, you should know that the answer lies in our annual JAXenter survey.

Survey respondents have decided: PostgreSQL is the winner. 25.3 percent found it “very interesting” and 37.7 percent found it “interesting”. In total, PostgreSQL managed to get 63 percent of the respondents excited about the prospect of using it this year.

The runner-up is Elasticsearch with a total of 59 percent. It seems that the student has become the master; although Elasticsearch is based on Lucene, the latter didn’t manage to convince as many respondents to give it a try in 2017. The Lucene/Solr combination only grabbed the attention of 43.8 percent of the respondents; a high score in itself, but not when compared to Elasticsearch’s result.

A similar shift can be seen in the case of the data processing tools Apache Spark and Hadoop. Survey respondents’ interest in Hadoop (34.8 percent) stands no chance against their interest in Apache Spark (53.3 percent).

It appears that we have a lot of “drama” in this part of the evaluation. In addition to a couple of cases of “student surpasses master”, we also have a small “fight” between a few NoSQL databases: MongoDB, Cassandra, Redis and Neo4j. In-memory data grid Hazelcast has managed to outshine both CouchDB and the classic Oracle. Microsoft SQL Server seems to be the outcast this year.

JAXenter technology trends survey results

One thing is clear: data storage and processing are once again in the public eye. Today, the endless possibilities one has with data storage and processing are becoming not only necessary but also “fashionable” (a.k.a. in great demand). Case in point: some of the greatest companies are betting on databases such as MySQL or PostgreSQL. Let’s see who’s on team MySQL and who’s on team PostgreSQL.

SEE ALSO: 4 best open source databases you should consider using for your next project

Team MySQL

We used StackShare, a software discovery platform that allows developers to find and compare software tools, to look at the software stacks of some of the world’s most popular companies. You can have a look for yourself if you want to see what tools and services your favorite startup is using.

GitHub (see stack here)

Airbnb (see stack here). They also give Hadoop some love.

Yelp (see stack here). They also give Hadoop some love.

Coursera (see stack here). They also give Cassandra some love.

Ask.fm (see stack here)

9GAG (see stack here). They also give Hadoop and Memcached some love.

Trivago (see stack here). They also give Hadoop and Memcached some love.

Freelancer.com (see stack here)

Team PostgreSQL

Reddit (see stack here). They also give Cassandra and Memcached some love.

Spotify (see stack here). They also give Cassandra and Hadoop some love.

Zalando (see stack here). They also give Cassandra and Hadoop some love.

DuckDuckGo (see stack here)

Travis CI (see stack here)

Which team are you on? Tell us in the comments section.

2017 vs 2016: What changed?

Back to our survey for a second. Although there aren’t any massive changes between the databases readers prefer this year vs. the ones they preferred in 2016,  it’s worth mentioning that Redis has gathered more points this year than it did in 2016 (34 percent last year and 43.2 percent in 2017) while MongoDB experienced the opposite change: it lost some points in the meantime (60 percent in 2016 and 49.8 percent this year).

JAXenter annual survey: 2017 v 2016

The million-dollar question is: What’s going to happen next year in the land of databases? Thoughts?

How to Create WordPress MySQL Databases on cPanel

This article is part of a series created in partnership with SiteGround. Thank you for supporting the partners who make SitePoint possible.

WordPress’ success owes much to its quick and simple five-minute installation procedure. Yet the MySQL database still causes confusion for many.

This tutorial describes how to create a database using cPanel, a popular platform management utility offered by many web hosts. We’ll also discuss how to use this database during a WordPress installation. The techniques can be used by any web application which requires MySQL.

Let’s start with the basics and terminology…

What is a Database?

A database is a collection of organized data. That’s it. WordPress stores all its page, post, category and user data in a database.

MySQL is a database management system (DBMS). It is software which allows you to create, update, read and delete data within a database. A single MySQL installation can manage any number of self-contained databases. You could have one for WordPress, another for Magento, and others for Drupal or whatever you need.

There are plenty of alternatives but MySQL became popular for several reasons:

  • it is free, open source software. It is now owned by Oracle but there are open MySQL-compatible options such as MariaDB.
  • it became synonymous with PHP – the web’s most-used language/runtime which powers WordPress. Both PHP and MySQL appeared in the mid-1990s when web development was in its infancy.
  • it adopts Structured Query Language (SQL) – a (fairly) standard language for creating data structures and data.
  • it is fast, simple to install and has many third-party development tools.

How do Applications Access a Database?

Applications such as WordPress access their data via a database connection. In the case of MySQL, WordPress’ PHP code can only establish a connection when it knows:

  • the address where MySQL is installed
  • the name of the database it needs to access
  • a user ID and password required to access that database

A database “user” account must be defined for WordPress use. It can have a very strong password and set appropriate database permissions.

How is Data Stored?

MySQL and other SQL databases store data in relational tables.

For example, you may have a set of article posts. Each post will have unique data, such as the title and body text. It will also have data used in other posts, such as the category and author details. Rather than repeat the same data again and again, we create separate tables:

  • an author table containing an ID, the author’s name and other details
  • a category table containing an ID and the category name
  • a post table containing the article title and body text, pointing to the author and category by referencing the associated ID numbers

SQL databases implement safeguards to guarantee data integrity. You should not be able to reference an author ID which does not exist or delete a category used by one or more articles.

These table definitions and rules form a database schema. A set of SQL commands execute during WordPress installation to create this schema. Only then are the tables ready to store data.
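The author/category/post layout above can be sketched as a concrete schema. The example below uses Python’s bundled sqlite3 as a stand-in for MySQL (so the DDL is simplified, and the table contents are invented) to show a foreign-key safeguard rejecting the delete of a category that a post still references:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")   # sqlite needs enforcement enabled explicitly

# A simplified version of the author/category/post schema described above.
conn.executescript("""
CREATE TABLE author   (id INTEGER PRIMARY KEY, name TEXT NOT NULL);
CREATE TABLE category (id INTEGER PRIMARY KEY, name TEXT NOT NULL);
CREATE TABLE post (
    id          INTEGER PRIMARY KEY,
    title       TEXT NOT NULL,
    body        TEXT,
    author_id   INTEGER NOT NULL REFERENCES author(id),
    category_id INTEGER NOT NULL REFERENCES category(id)
);
""")

conn.execute("INSERT INTO author (id, name) VALUES (1, 'Alice')")
conn.execute("INSERT INTO category (id, name) VALUES (1, 'News')")
conn.execute("INSERT INTO post (title, body, author_id, category_id) "
             "VALUES ('Hello', '...', 1, 1)")

# Deleting a category that a post references violates the foreign key.
try:
    conn.execute("DELETE FROM category WHERE id = 1")
    deleted = True
except sqlite3.IntegrityError:
    deleted = False
print(deleted)   # False: the safeguard blocked the delete
```

WordPress’ real schema is larger, of course, but the integrity mechanism it relies on is the same.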

How to Create a Database

Web hosts using cPanel provide a web address (such as https://site.com/cpanel), and a user ID and password to gain access. Keep these details safe. Do not confuse them with the database or WordPress user credentials!

If you’re looking for a host that supports cPanel, try SiteGround, our web host of choice. All plans support cPanel, and they’ve re-skinned the dashboard to organize everything in a more friendly way.

cPanel

Your view may look a little different, but locate the DATABASES section or enter “MySQL” in the search box.

Click the MySQL Database Wizard and follow the steps:

Step 1: Choose a Database Name

Your database requires a name:

create a database

The name may have a prefix applied, such as mysite_. Enter an appropriate name such as blog or wordpress and hit Next Step.

Step 2: Create a Database User

You must now define the MySQL user account which WordPress uses to access your database:

create database user

Note the user name may also have the same prefix applied (mysite_). In this screenshot, our user ID is mysite_blogDBuser.

cPanel will ensure you enter a strong password. The password can be complex; you will use it only once during WordPress installation. I recommend the random Password Generator:

password generator

Make sure you copy the user ID and password to a text file or another safe place before hitting Create User.

Step 3: Set the Database User Privileges

The user created above requires full access to the database during WordPress installation. It runs scripts to create tables and populate them with the initial data.

Check ALL PRIVILEGES, then hit Next Step:

set database user privileges

cPanel will confirm creation of the MySQL database and user.


The MySQL Databases Panel

You can use the MySQL Databases panel instead of the wizard. It still allows you to create a database and user, but you then add that user to the database.

It also provides facilities to update, repair and delete databases and users.

How to Install WordPress

Your cPanel may provide WordPress and other application installers. It may not be necessary to follow the steps above because the script creates a database for you.

If manual installation is necessary or preferred, download WordPress and extract the files. You may be able to do this on your server via SSH but FTP/SFTP is also supported.

Open a browser and navigate to the domain/path where you copied WordPress (e.g. http://mysite.com/). This starts the installation:

install WordPress

You must enter:

  • the MySQL Database Name created in step 1
  • the MySQL database Username created in step 2
  • the MySQL database user’s Password created in step 2
  • the Database Host. This is the address of the server where MySQL runs. It will often be localhost or 127.0.0.1 because MySQL is running on the same server where your site is hosted. Your host will advise you if this is different.

The Table Prefix adds a short string to the start of all table names. Change it when:

  1. You want to install many copies of WordPress which all point to the same database, and/or
  2. You want to make your installation a little more secure by making table names less obvious.
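To see how the prefix keeps multiple installations apart, here is a small sketch (again using Python’s sqlite3 in place of MySQL, with hypothetical wp1_/wp2_ prefixes) in which two sets of tables coexist in one database without colliding:

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# Two hypothetical WordPress installs sharing one database, kept apart
# only by their table prefixes (WordPress itself defaults to wp_).
for prefix in ("wp1_", "wp2_"):
    conn.execute(f"CREATE TABLE {prefix}posts (id INTEGER PRIMARY KEY, title TEXT)")
    conn.execute(f"CREATE TABLE {prefix}options (id INTEGER PRIMARY KEY, value TEXT)")

tables = sorted(row[0] for row in conn.execute(
    "SELECT name FROM sqlite_master WHERE type = 'table'"))
print(tables)   # ['wp1_options', 'wp1_posts', 'wp2_options', 'wp2_posts']
```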

Hit Submit and WordPress will verify your credentials before continuing installation.

Create a WordPress User

WordPress prompts for the ID, password and email address of a WordPress administrator. This is someone responsible for managing WordPress. It is different to the MySQL database and cPanel credentials!


Hit Install WordPress and the dashboard will appear within a few seconds.

Bonus Security Step

We granted full permission to the database user for WordPress installation. You can downgrade these privileges after installation to improve security.

The following rights should be adequate:

  • SELECT
  • INSERT
  • UPDATE
  • DELETE
  • ALTER
  • CREATE TABLE
  • DROP TABLE
  • INDEX

Some plug-ins may need extra rights, so enable ALL PRIVILEGES if you encounter problems.
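In MySQL the downgrade is done with GRANT/REVOKE statements; note that in GRANT syntax, table creation and removal are covered by the CREATE and DROP privilege names. Below is a small helper that assembles the statement from the list above (the database and user names are the placeholder values from the earlier cPanel steps):

```python
# Privileges from the list above, spelled as MySQL's GRANT syntax expects:
# "CREATE TABLE" and "DROP TABLE" map to the CREATE and DROP privileges.
WORDPRESS_PRIVILEGES = [
    "SELECT", "INSERT", "UPDATE", "DELETE", "ALTER", "CREATE", "DROP", "INDEX",
]

def grant_statement(privileges, database, user, host="localhost"):
    """Build a GRANT statement to paste into a MySQL client."""
    return (f"GRANT {', '.join(privileges)} "
            f"ON `{database}`.* TO '{user}'@'{host}';")

# Placeholder names from the earlier cPanel example:
print(grant_statement(WORDPRESS_PRIVILEGES, "mysite_blog", "mysite_blogDBuser"))
```

Run the resulting statement as an administrative user, after first revoking the broader rights granted during installation.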

[“Source-sitepoint”]

The sought after Linux professional

There’s no such thing as “just a Linux sysadmin,” which is what makes Linux professionals so incredibly valuable. We’ve all been hearing that the demand for Linux professionals is “at its highest ever!!!” for years. In recent years, though, it hasn’t just been Linux nuts like me saying it. You may reference the 2014 Linux Jobs Report by The Linux Foundation and assume they’re biased, but a quick search over at Monster.com shows that the demand for Linux professionals is a real thing.

Linux has been around for decades, so why the sudden interest?

Flexibility.

Sure, I mean Linux is flexible, but more than that, Linux System Administrators are flexible. It’s not news to anyone that Linux is gaining popularity in part due to its dominance in the cloud and the datacenter. And certainly that large install base needs sysadmins who understand Linux and how it works. More importantly, however, companies need sysadmins who can make those cloud based services work with their particular internal needs.

If you need someone to integrate your homegrown database system with a cloud-based Linux infrastructure, you need a Linux professional. Take my personal experience when transitioning from a Linux-centric server room to a Microsoft-dominated company. My certifications are strictly network and Linux-based (specifically CCNA & LPIC/Linux+). Still, I was confident applying for a management position in a database department that used 100% Microsoft SQL, even though I’d never touched MS SQL in my life. And I never claimed to have done so in my interviews, because I understand conceptually what needs to be done. I have first-hand experience with integrating various operating systems, so learning the nuances of Microsoft-specific procedures didn’t worry me at all.

I got the job, and after a year I can assure you my lack of first-hand experience didn’t affect my ability to lead a team or make technical decisions. My point? Linux users tend to be a cut above the rest, not because they’re inherently smarter or better, but because Linux requires you to understand what you’re doing on a level that’s not required with Windows. That conceptual understanding is invaluable, and interviewers know it. As Linux users and pros, we’ve been learning to integrate into heterogeneous environments our entire careers. It’s easy to find a strictly Microsoft shop, but 100% Linux? That’s almost unheard of. That means as Linux administrators, we have been forced to understand multiple systems in order to do the simplest of tasks. Think about it: every Linux user in the world would be able to configure a network connection in Windows 7, and if they didn’t know how, it would be easy to figure out. But take a Windows administrator and ask them to set up a static IP on a Debian server, and that skill turns out to be far less common.
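For the record, the Debian half of that comparison is a few lines of configuration. A minimal sketch, assuming the classic /etc/network/interfaces scheme and example addresses (the interface name and IPs here are illustrative):

```
# /etc/network/interfaces -- example static configuration for eth0
auto eth0
iface eth0 inet static
    address 192.168.1.50
    netmask 255.255.255.0
    gateway 192.168.1.1
```

The change takes effect after bringing the interface down and up again, e.g. `ifdown eth0 && ifup eth0`.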

What makes Linux professionals valuable

In order to fill those desperately needed Senior Administrator positions, Linux folks need to have a firm grasp of what Linux can and can’t do. Is scaling to the cloud a wise move? Will database latency cause transaction errors if queries take place over the Internet? Can we use a cloud service like Amazon, or do we have to use Azure due to Microsoft specific code?

In order to answer those tough questions, a sysadmin must not only be comfortable in their area of expertise, but also have understanding and experience in cross-platform solutions. As I pointed out earlier, this pretty much describes what it means to be a Linux professional! Nobody likes hiring or even working with an arrogant Linux zealot, and unfortunately, it’s easy to pick up an air of superiority. The key to being hirable (and not being a jerk) is to turn that arrogance into fearlessness. Don’t call your potential employer stupid for implementing a Microsoft virtualization platform; tell them how excited you are to get your hands on it so you can learn what advantages and disadvantages it offers.

Who knows, maybe you’ll eventually replace their entire system with open source—but you won’t even have the chance if you start off by insulting them.

What if you have not been using Linux your entire career? What if you haven’t had a career yet at all? That’s the beauty of open source. Nothing, I repeat, nothing about working with Linux is a secret. By design, every bit of information is available freely on the Internet. You can download multiple distributions, countless open source applications, and enough documentation to make your eyes cross. All for free. Certainly there are advantages to professional training when it comes to learning Linux, but not because trainers have access to anything not already available to anyone.

Linux and open source software, coupled with the Internet, have leveled the playing field when it comes to learning and growing as a professional. I’m a Linux professional today because in my early 20s I couldn’t afford to study anything else. Today, I couldn’t be happier with those humble beginnings. Linux has changed my life, and if the studies and job searches are any indication, it can change yours too.

[“Source-opensource”]

PHP at 20: From pet project to powerhouse

When Rasmus Lerdorf released “a set of small tight CGI binaries written in C,” he had no idea how much his creation would impact Web development. Delivering the opening keynote at this year’s SunshinePHP conference in Miami, Lerdorf quipped, “In 1995, I thought I had unleashed a C API upon the Web. Obviously, that’s not what happened, or we’d all be C programmers.”

In fact, when Lerdorf released version 1.0 of Personal Home Page Tools — as PHP was then known — the Web was very young. HTML 2.0 would not be published until November of that year, and HTTP/1.0 not until May the following year. NCSA HTTPd was the most widely deployed Web server, and Netscape Navigator was the most popular Web browser, with Internet Explorer 1.0 to arrive in August. In other words, PHP’s beginnings coincided with the eve of the browser wars.

Those early days speak volumes about PHP’s impact on Web development. Back then, our options were limited when it came to server-side processing for Web apps. PHP stepped in to fill our need for a tool that would enable us to do dynamic things on the Web. That practical flexibility captured our imaginations, and PHP has since grown up with the Web. Now powering more than 80 percent of the Web, PHP has matured into a scripting language that is especially suited to solve the Web problem. Its unique pedigree tells a story of pragmatism over theory and problem solving over purity.

The Web glue we got hooked on

PHP didn’t start out as a language, and this is clear from its design — or lack thereof, as detractors point out. It began as an API to help Web developers access lower-level C libraries. The first version was a small CGI binary that provided form-processing functionality with access to request parameters and the mSQL database. And its facility with a Web app’s database would prove key in sparking our interest in PHP and PHP’s subsequent ascendancy.

By version 2 — aka PHP/FI — database support had expanded to include PostgreSQL, MySQL, Oracle, Sybase, and more. It supported these databases by wrapping their C libraries, making them a part of the PHP binary. PHP/FI could also wrap the GD library to create and manipulate GIF images. It could be run as an Apache module or compiled with FastCGI support, and it introduced the PHP script language with support for variables, arrays, language constructs, and functions. For many of us working on the Web at that time, PHP was the kind of glue we’d been looking for.

As PHP folded in more and more programming language features, morphing into version 3 and onward, it never lost this gluelike aspect. Through repositories like PECL (PHP Extension Community Library), PHP could tie together libraries and expose their functionality to the PHP layer. This capacity to bring together components became a significant facet of the beauty of PHP, though it was not limited to its source code.

The Web as a community of coders

PHP’s lasting impact on Web development isn’t limited to what can be done with the language itself. How PHP work is done and who participates — these too are important parts of PHP’s legacy.

As early as 1997, PHP user groups began forming. One of the earliest was the Midwest PHP User’s Group (later known as Chicago PHP), which held its first meeting in February 1997. This was the beginning of what would become a vibrant, energetic community of developers assembled over an affinity for a little tool that helped them solve problems on the Web. The ubiquity of PHP made it a natural choice for Web development. It became especially popular in the shared hosting world, and its low barrier to entry was attractive to many early Web developers.

With a growing community came an assortment of tools and resources for PHP developers. The year 2000 — a watershed moment for PHP — witnessed the first PHP Developers’ Meeting, a gathering of the core developers of the programming language, who met in Tel Aviv to discuss the forthcoming 4.0 release. PHP Extension and Application Repository (PEAR) also launched in 2000 to provide high-quality userland code packages following standards and best practices. The first PHP conference, PHP Kongress, was held in Germany soon after. PHPDeveloper.org came online, and to this day, it is the most authoritative news source in the PHP community.

This communal momentum proved vital to PHP’s growth in subsequent years, and as the Web development industry erupted, so did PHP. PHP began powering more and larger websites. More user groups formed around the world. Mailing lists; online forums; IRC; conferences; trade journals such as php[architect], the German PHP Magazin, and International PHP Magazine — the vibrancy of the PHP community had a significant impact on the way Web work would be done: collectively and openly, with an emphasis on code sharing.

Then, 10 years ago, shortly after the release of PHP 5, an interesting thing happened in Web development that created a general shift in how the PHP community built libraries and applications: Ruby on Rails was released.

The rise of frameworks

The Ruby on Rails framework for the Ruby programming language created an increased focus and attention on the MVC (model-view-controller) architectural pattern. The Mojavi PHP framework a few years prior had used this pattern, but the hype around Ruby on Rails is what firmly cemented MVC in the PHP frameworks that followed. Frameworks exploded in the PHP community, and frameworks have changed the way developers build PHP applications.

Many important projects and developments have arisen, thanks to the proliferation of frameworks in the PHP community. The PHP Framework Interoperability Group formed in 2009 to aid in establishing coding standards, naming conventions, and best practices among frameworks. Codifying these standards and practices helped provide more interoperable software for developers using member projects’ code. This interoperability meant that each framework could be split into components and stand-alone libraries could be used together with monolithic frameworks. With interoperability came another important milestone: The Composer project was born in 2011.

Inspired by Node.js’s NPM and Ruby’s Bundler, Composer has ushered in a new era of PHP application development, creating a PHP renaissance of sorts. It has encouraged interoperability between packages, standard naming conventions, adoption of coding standards, and increased test coverage. It is an essential tool in any modern PHP application.

The need for speed and innovation

Today, the PHP community has a thriving ecosystem of applications and libraries. Some of the most widely installed PHP applications include WordPress, Drupal, Joomla, and MediaWiki. These applications power the Web presence of businesses of all sizes, from small mom-and-pop shops to sites like whitehouse.gov and Wikipedia. Six of the Alexa top 10 sites use PHP to serve billions of pages a day. As a result, PHP applications have been optimized for speed — and much innovation has gone into PHP core to improve performance.

In 2010, Facebook unveiled its HipHop for PHP source-to-source compiler, which translates PHP code into C++ code and compiles it into a single executable binary application. Facebook’s size and growth necessitated the move away from standard interpreted PHP code to a faster, optimized executable. However, Facebook wanted to continue using PHP for its ease of use and rapid development cycles. HipHop for PHP evolved into HHVM, a JIT (just-in-time) compilation-based execution engine for PHP, which included a new language based on PHP: Hack.

Facebook’s innovations, as well as other VM projects, created competition at the engine level, leading to discussions about the future of the Zend Engine that still powers PHP’s core, as well as the question of a language specification. In 2014, a language specification project was created “to provide a complete and concise definition of the syntax and semantics of the PHP language,” making it possible for compiler projects to create interoperable PHP implementations.

The next major version of PHP became a topic of intense debate, and a project known as phpng (next generation) was offered as an option to clean up, refactor, optimize, and improve the PHP code base; the phpng branch also showed substantial improvements to the performance of real-world applications. After the name “PHP 7” was chosen for the next major version (owing to a previous, unreleased PHP 6.0), the phpng branch was merged in, and plans were made to proceed with PHP 7, working in many of the language features offered by Hack, such as scalar and return type hinting.
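Those scalar and return type declarations look like this in PHP 7 — a minimal sketch, with a made-up function name for illustration:

```php
<?php
// Opt in to strict checking of scalar type declarations for this file.
declare(strict_types=1);

// Scalar parameter types (int) and a return type declaration (: int),
// both new in PHP 7.
function add(int $a, int $b): int
{
    return $a + $b;
}

echo add(2, 3); // 5
```

With `strict_types=1`, passing a string such as `"2"` to `add()` raises a `TypeError` rather than being silently coerced.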

With the first PHP 7 alpha release due out today and benchmarks showing performance as good as or better than that of HHVM in many cases, PHP is keeping up with the pace of modern Web development needs. Likewise, the PHP-FIG continues to innovate and push frameworks and libraries to collaborate and cooperate — most recently with the adoption of PSR-7, which will change the way PHP projects handle HTTP. User groups, conferences, publications, and initiatives like PHPMentoring.org continue to advocate best practices, coding standards, and testing to the PHP developer community.

PHP has seen the Web mature through various stages, and PHP itself has matured. Once a simple API wrapper around lower-level C libraries, PHP has become a full-fledged programming language in its own right. Its developer community is vibrant and helpful, priding itself on pragmatism and welcoming newcomers. PHP has stood the test of time for 20 years, and current activity in the language and community is ensuring it will remain a relevant and useful language for years to come.

During his SunshinePHP keynote, Rasmus Lerdorf reflected, “Did I think I’d be here 20 years later talking about this silly little project I did? No, I didn’t.”

Here’s to Lerdorf and the rest of the PHP community for transforming this “silly little project” into a lasting, powerful component of the Web today.

[“Source-infoworld”]

Grids battle against lost hypergrid content

Most people who’ve traveled the hypergrid have had the experience of not being able to bring stuff back home, or of mysteriously not being able to take content to other grids.

Recently, a number of different OpenSim developers have taken on the fight, and have been reporting progress.

Crista Lopes steps in

Crista Lopes, the woman who invented the hypergrid, stepped in to fix one bug, first reported on the OpenSim “Mantis” bug-tracking page in March.

Dreamscape grid resident “Xantis” reported not being able to bring items home from OSgrid.

Outworldz grid founder Fred Beckhusen, also known as “Ferd Frederix” in-world, confirmed the problem — and the workaround.

“If I wear the item at the foreign grid, and return home, I can detach it and rez it,” he reported. “Otherwise, no rez allowed.”

This was the same workaround recommended by Mal Burns of the Inworld Review.

But it’s too early for the fix to go into mainstream OpenSim, so it’s up to individual grids to install it.

The new DigiWorldz Hyper Mall features a mix of both free and commercial content.

“If grid owners have in-house people to modify their code for them, they could simply grab the code from that submitted revision and merge it into their existing code,” said DigiWorldz grid founder Terry Ford, also known as “Butch Arnold” in-world. In addition to managing the technology for DigiWorldz, Ford also takes care of 3rd Rock Grid and the Great Canadian Grid.

“If they don’t have in-house people, they’ll either have to hire someone to do it… or wait until a new release which includes this code becomes available,” he told Hypergrid Business. He did not recommend running the experimental code released after the fix was made.

According to Ford, for a user to bring content from one grid to another without wearing it on the first grid, both grids have to be running the fixed code.

The bug only applies to grids that use the “suitcase” functionality, said Tim Rogers, CEO of the Zetamex OpenSim hosting company, which recently began taking new orders again for grid hosting and region hosting.

The suitcase — a folder inside the avatar inventory that can be accessed while on other grids — was originally designed to help protect hypergrid travelers. Only content inside the folder could be accessed on foreign grids, preventing rogue grid owners from stealing visitors’ inventories. Some users, however, find that the suitcase just gets in their way; they only spend time on reputable grids anyway, and if they do visit a grid on the wrong side of the tracks, and that grid wants to steal a stranger’s inventory filled with thousands of items all labeled “primitive,” more power to it.

The Lani Mall on OSgrid’s Lani region is home to more than 50 shops offering over 2,000 different products, many of them freebies with a science-fiction theme.

“Every grid owner that signs up with us always asks us to disable the suitcase because it complicates things for their customers,” Rogers told Hypergrid Business.

However, Rogers said, Zetamex will be rolling out the patch to its customers.

Dreamland Metaverse, another OpenSim hosting provider, has been testing the new code over the past few days, and is planning to roll it out soon.

“We have finished testing the latest version and already rolled out that version for our regions on OSgrid,” CEO Dierk Brunner told Hypergrid Business.

Another content-related issue, first reported yesterday, is a problem that occurs when content moves between grids that run on Windows and those that don’t. Some grids have figured out how to clean up corrupted content in their databases while they wait for a more permanent fix.

Kitely fixes content export problems

The Kitely Market delivers to about 100 different grids.

The Kitely Market delivers to about 100 different grids.

Earlier this week, Kitely fixed a content problem of its own, in which users could bring content home from other grids — but then could not take that content back out with them while traveling.

The problem would crop up when one user uploaded content to the grid, such as a Linda Kellie freebie, without copy and transfer permissions; if another user then brought that same content in from a foreign grid, it would never be allowed to leave.

“But if the same item first came into Kitely from another grid, then it would be allowed to leave,” said Oren Hurvitz, Kitely’s co-founder and VP of R&D, in an announcement. “This made no sense. It also didn’t add much to security, since we still couldn’t prevent items from being taken out using Copybots.”

The fix only applies to new items, not to items already in a user’s inventory, he said. And it doesn’t affect the “export” permission set by merchants who sell content in the Kitely Market.

The Kitely Market currently carries about 10,000 different items, 60 percent of which can be delivered from the website directly to avatars on foreign grids.

[“Source-hypergridbusiness”]