Take a closer look at your Spark implementation

Apache Spark, the extremely popular data analytics execution engine, was initially released in 2012. It wasn’t until 2015 that Spark really saw an uptick in support, but by November 2015, Spark saw 50 percent more activity than the core Apache Hadoop project itself, with more than 750 contributors from hundreds of companies participating in its development in one form or another.

Spark is a hot commodity for a reason. Its performance, general-purpose applicability, and programming flexibility combine to make it a versatile execution engine. Yet that versatility also leads to varying levels of support for the product and different ways of delivering solutions.

When evaluating analytic software products that support Spark, customers should look closely under the hood and examine four key facets of how that support is implemented:

  • How Spark is utilized inside the platform
  • What you get in a packaged product that includes Spark
  • How Spark is exposed to you and your team
  • How you perform analytics with the different Spark libraries

Spark can be used as a developer tool via its APIs, or it can be used by BI tools via its SQL interface. Or Spark can be embedded in an application, providing access to business users without requiring programming skills and without limiting Spark’s utility through a SQL interface. I examine each of these options below and explain why all Spark support is not the same.

Programming on Spark

If you want the full power of Spark, you can program directly to its processing engine. There are APIs that are exposed through Java, Python, Scala, and R. In addition to stream and graph processing components, Spark offers a machine-learning library (MLlib) as well as Spark SQL, which allows data tools to connect to a Spark engine and query structured data, or programmers to access data via SQL queries they write themselves.
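
To make this concrete, here is a minimal, hypothetical sketch of what programming directly against the Spark APIs looks like from Python (PySpark). It assumes a local Spark installation; the application name, sample data, and column names are invented for illustration, not taken from any product discussed here.

    # A minimal PySpark sketch: build a DataFrame in memory and aggregate it
    # with the DataFrame API. Sample data and names are illustrative only.
    from pyspark.sql import SparkSession, functions as F

    spark = (SparkSession.builder
             .appName("direct-api-example")
             .master("local[*]")   # local cores; a cluster URL would go here
             .getOrCreate())

    events = spark.createDataFrame(
        [("alice", "click", 3), ("bob", "click", 5), ("alice", "view", 7)],
        ["user", "action", "count"])

    summary = (events
               .groupBy("user")
               .agg(F.sum("count").alias("total_events"))
               .orderBy(F.desc("total_events")))

    summary.show()
    spark.stop()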

A number of vendors offer standalone Spark implementations; the major Hadoop distribution suppliers also offer Spark within their platforms. Access is exposed through either a command-line or notebook interface.

But performing analytics on core Spark with its APIs is a time-consuming, programming-intensive process. While Spark offers an easier programming model than, say, native Hadoop, it still requires developers. Even for organizations with developer resources, deploying them on lengthy data analytics projects may amount to an intolerable hidden cost. For many organizations, programming on Spark is therefore not an actionable course.

BI on Spark

Spark SQL is a standards-based way to access data in Spark. It has been relatively easy for BI products to add support for Spark SQL to query tabular data in Spark. The dialect of SQL used by Spark is similar to that of Apache Hive, making Spark SQL akin to earlier SQL-on-Hadoop technologies.

Although Spark SQL uses the Spark engine behind the scenes, it suffers from the same disadvantages as Hive and Impala: Data must be in a structured, tabular format to be queried. This forces Spark to be treated as if it were a relational database, which cripples many of the advantages of a big data engine. Simply put, layering BI on top of Spark requires transforming the data into a tabular format the BI tools can consume.
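
For comparison, here is a small, hypothetical PySpark sketch of that SQL path: the data must first be registered as a tabular view before it can be queried, much as a BI tool would query it through Spark SQL's Thrift JDBC/ODBC server. The table and column names are invented for illustration.

    # A minimal Spark SQL sketch: tabular data is registered as a view and
    # then queried with SQL. Names and values are illustrative only.
    from pyspark.sql import SparkSession

    spark = (SparkSession.builder
             .appName("sql-example")
             .master("local[*]")
             .getOrCreate())

    sales = spark.createDataFrame(
        [("2016-09-01", "EMEA", 1200.0),
         ("2016-09-01", "APAC", 950.0),
         ("2016-09-02", "EMEA", 1430.0)],
        ["order_date", "region", "revenue"])

    # The data must already be in this structured, tabular shape before SQL can touch it.
    sales.createOrReplaceTempView("sales")

    spark.sql("""
        SELECT region, SUM(revenue) AS total_revenue
        FROM sales
        GROUP BY region
        ORDER BY total_revenue DESC
    """).show()

    spark.stop()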

Embedding Spark

Another way to leverage Spark is to abstract away its complexity by embedding it deep into a product and taking full advantage of its power behind the scenes. This allows users to leverage the speed and power of Spark without needing developers.

This architecture brings up three key questions. First, does the platform truly hide all of the technical complexities of Spark? As a customer, you need to examine how you would carry out each step of the analytic cycle — integration, preparation, analysis, visualization, and operationalization. A number of products offer self-service capabilities that abstract away Spark’s complexities, but others force the analyst to dig down and code — for example, when performing integration and preparation. These products may also require you to first ingest all your data into the Hadoop file system for processing. This lengthens your analytic cycles, creates fragile and fragmented analytic processes, and requires specialized skills.

Second, how does the platform take advantage of Spark? It’s critical to understand how Spark is used in the execution framework. Spark is sometimes embedded in a fashion that does not have the full scalability of a true cluster. This can limit overall performance as the volume of analytic jobs increases.

Third, how are you protected for the future? The strength of being tightly coupled with the Spark engine is also a weakness. The big data industry moves quickly. MapReduce was the predominant engine in Hadoop for six years. Apache Tez became mainstream in 2013, and now Spark has become a major engine. If the technology curve continues to produce new engines at the same rate, Spark will almost certainly be supplanted by a new engine within 18 months, forcing products tightly coupled to Spark to be reengineered — a far from trivial undertaking. Even setting that effort aside, you must consider whether the redesigned product will be fully compatible with what you’ve built on the older version.

The first step to uncovering the full power of Spark is to understand that not all Spark support is created equal. It’s crucial that organizations grasp the differences in Spark implementations and what each approach means for their overall analytic workflow. Only then can they make a strategic buying decision that will meet their needs over the long haul.

Andrew Brust is senior director of market strategy and intelligence at Datameer.

[Source:- IW]

MySQL zero-day exploit puts some servers at risk of hacking

A zero-day exploit could be used to hack MySQL servers.

A publicly disclosed vulnerability in the MySQL database could allow attackers to completely compromise some servers.

The vulnerability affects “all MySQL servers in default configuration in all version branches (5.7, 5.6, and 5.5) including the latest versions,” as well as the MySQL-derived databases MariaDB and Percona DB, according to Dawid Golunski, the researcher who found it.

The flaw, tracked as CVE-2016-6662, can be exploited to modify the MySQL configuration file (my.cnf) and cause an attacker-controlled library to be executed with root privileges if the MySQL process is started with the mysqld_safe wrapper script.

The exploit can be executed if the attacker has an authenticated connection to the MySQL service, which is common in shared hosting environments, or through an SQL injection flaw, a common type of vulnerability in websites.

Golunski reported the vulnerability to the developers of all three affected database servers, but so far only MariaDB and Percona DB have received patches. Oracle, which develops MySQL, was informed on Jul. 29, according to the researcher, but has yet to fix the flaw.

Oracle releases security updates on a quarterly schedule, and the next batch is expected in October. However, because the MariaDB and Percona patches have been public since the end of August, the researcher decided to release details about the vulnerability Monday so that MySQL admins can take action to protect their servers.

Golunski’s advisory contains a limited proof-of-concept exploit, but some parts have been intentionally left out to prevent widespread abuse. The researcher also reported a second vulnerability to Oracle, CVE-2016-6663, that could further simplify the attack, but he hasn’t published details about it yet.

The disclosure of CVE-2016-6662 was met with some criticism on specialized discussion forums, where some users argued that it’s actually a privilege escalation vulnerability and not a remote code execution one as described, because an attacker would need some level of access to the database.

“As temporary mitigations, users should ensure that no mysql config files are owned by mysql user, and create root-owned dummy my.cnf files that are not in use,” Golunski said in his advisory. “These are by no means a complete solution and users should apply official vendor patches as soon as they become available.”
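
As a rough sketch only (assuming a Linux host with common default MySQL paths, run as root), the quoted mitigation could look something like the following. It is not a substitute for the official vendor patches.

    # Hypothetical helper for the temporary mitigation quoted above: warn if a
    # my.cnf is owned by the mysql user, and create root-owned dummy my.cnf
    # files at unused default locations. Paths below are common defaults, not
    # an exhaustive list; adjust for your installation and run as root.
    import os
    import pwd

    CANDIDATE_PATHS = ["/etc/my.cnf", "/etc/mysql/my.cnf", "/var/lib/mysql/my.cnf"]

    def owner(path):
        return pwd.getpwuid(os.stat(path).st_uid).pw_name

    for path in CANDIDATE_PATHS:
        if os.path.exists(path):
            if owner(path) == "mysql":
                print(f"WARNING: {path} is owned by the mysql user; reassign it to root")
        elif os.path.isdir(os.path.dirname(path)):
            # Create an empty, root-owned placeholder so the mysql user cannot
            # drop its own config file at this location.
            with open(path, "w"):
                pass
            os.chmod(path, 0o644)
            print(f"created root-owned dummy {path}")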

Oracle didn’t immediately respond to a request for comment on the vulnerability.

[Source:- IW]

Wrists-on with Garmin’s new fenix 5 lineup at CES 2017

If you’re a serious athlete who’s been looking for a powerful multisport fitness watch, odds are you’ve stumbled across Garmin’s fēnix line of devices. While they’re quite pricey, the fēnix 3 series has proven to be one of the most powerful multisport watches on the market.

At CES 2017, Garmin has unveiled three new entries in its fēnix lineup: the fēnix 5, fēnix 5S and fēnix 5X.

As the names might suggest, all three of these new devices are in the same family, so they all sport most of the same features. There are a few big differentiators between the three, though. The fēnix 5S, for instance, is a lighter, sleeker and smaller version of the standard fēnix 5. The fēnix 5 is the standard model, sporting all the same features as the 5S in a bigger form factor. The 5X is the highest-end device in the bunch, complete with preloaded wrist-based mapping.

The fēnix 5 is the standard model of the group. Measuring 47mm, it’s more compact than previous models like the fēnix 3HR, but it still packs all the multisport features you’ve come to expect from the series.

Garmin says the fēnix 5S is the first watch in the line designed specifically for female adventurers. Measuring just 42mm, the 5S is small and comfortable for petite wrists, without compromising any multisport features. It’s available in silver with a choice of white, turquoise or black silicone bands and a mineral glass lens.

There’s also a fēnix 5S Sapphire model with a scratch-resistant sapphire lens that’s available in black with a black band, champagne with a water-resistant gray suede band, or champagne with a metal band. This model also comes with an extra silicone QuickFit band.


The higher-end fēnix 5X measures 51mm and comes preloaded with TOPO US mapping, routable cycling maps and other navigation features like Round Trip Run and Round Trip Ride. With these new features, users can enter how far they’d like to run or ride, and their watch will suggest appropriate courses to choose from. The 5X will also display easy-to-read guidance cues for upcoming turns, allowing users to be aware of their route.

In addition, the 5X’s Around Me map mode shows points of interest and other map objects within the user’s range, helping users stay aware of their surroundings. This model will be available with a scratch-resistant sapphire lens.

 

[Source:- Androidauthority]

 

New Mac Pro release date rumours UK | Mac Pro 2016 tech specs: Kaby Lake processors expected at March 2017 Mac Pro update

When will Apple release a new Mac Pro? And what new features, specs and design changes should we expect when Apple updates the Mac Pro line for 2016? Is there any chance Apple will discontinue the Mac Pro instead of updating it?

Apple’s Mac Pro line-up could do with an update. The current Mac Pro model was announced at WWDC in June 2013 and, for a top-of-the-range system, the Mac Pro is looking pretty long in the tooth. But when will Apple announce a new Mac Pro? And what hardware improvements, design changes, tech specs and new features will we see in the new Mac Pro for 2016? (Or 2017, or…)

There’s some good news for expectant Mac Pro fans: code in Mac OS X El Capitan hints that a new Mac Pro (one with 10 USB 3 ports) could arrive soon. But nothing is certain at this point, and some pundits believe the Mac Pro should simply be discontinued.

Whatever the future holds for the Mac Pro, in this article we will be looking at all the rumours surrounding the next update of the Mac Pro line: the new Mac Pro’s UK release date and pricing, its expected design, and the new features and specs we hope to see in the next version of the Mac Pro.

Updated on 6 December 2016 to discuss the chances of a new Mac Pro appearing in March; and on 15 November with updated processor rumours

For more discussion of upcoming Apple launches, take a look at our New iMac rumours and our big roundup of Apple predictions for 2017. And if you’re considering buying one of the current Mac Pro models, read Where to buy Mac Pro in the UK and our Mac buying guide.

[Source:- Macworld]

Third Xiaomi Redmi 3S Prime, Redmi 3S flash sale today at 12 PM


If you’ve missed your chance at picking up the Xiaomi Redmi 3S or Redmi 3S Prime in the first two flash sales, you will have another opportunity to do so once again, today at 12 PM.

You only have to look back at the last two flash sales to see the popularity of these ultra-affordable smartphones from Xiaomi. In the first sale held two weeks ago, the Redmi 3S Prime sold out in just eight minutes, and in the second, which also saw the availability of the Redmi 3S, stocks lasted only until around 5 PM that evening.

If you’re wondering about which device is better suited to your needs, it is worth noting that most specifications and features are the same between the two versions, including the huge 4,100 mAh battery that is found with both. As far as differences go, the cheaper Redmi 3S comes with 16 GB of storage and 2 GB of RAM, while the Redmi 3S Prime offers 32 GB of internal storage and 3 GB of RAM, and also provides an additional layer of security with a fingerprint scanner.

The Xiaomi Redmi 3S is priced at Rs 6,999 (~$104), while the Redmi 3S Prime is only slightly more expensive, with its price tag of Rs 8,999 (~$134). The two devices will be available from both mi.com and Flipkart, with Flipkart also providing consumers with an exchange offer. So if you have an old Android smartphone lying around, you can always trade that in and pick up these affordable smartphones for even less.

Let us know if you were able to pick up the Xiaomi Redmi 3S or Redmi 3S Prime during today’s sale, and if you already have the phone, do share your thoughts on what your experience has been with it so far in the comments section below!

[Source: Androidauthority]

Docker, microservices will loom large at JavaOne

Don’t count out Java when it comes to fitting in with the latest computing paradigms. Oracle and other Java proponents are looking to keep the 21-year-old platform current by working with technologies like Docker containers, microservices, and the internet of things (IoT).

These technologies and a multitude of others, including JavaScript and modular Java, are noted in the session list for the upcoming JavaOne conference, the annual Java technical event being held in San Francisco beginning Sept. 18.

The session list serves as a gauge of priorities for the popular enterprise computing platform and language. Also at the event, Oracle is expected to roll out retooled plans to better equip enterprise Java for cloud computing and microservices. The move comes amid community concerns that Java EE’s development has stalled.

Docker containerization continues to command attention from Java advocates as well. Heroku is slated to present on how to use Docker to replicate an application architecture in a session on parity between development and production environments. Couchbase’s Arun Gupta, a longtime proponent of Java, will present on Docker support in Java IDEs, including NetBeans, Eclipse, and IntelliJ. He also will discuss building a private CI/CD pipeline on the Oracle Cloud platform with Java and Docker.

Google, meanwhile, will address deployment of Java applications and services at scale using Docker and the Kubernetes container orchestration system, while CloudBees will discuss a Docker platform featuring Jenkins continuous integration.

Oracle recently announced intentions to make microservices accommodations in the JVM, and microservices will get a hearing at JavaOne, with Red Hat presenting on Java EE 7 and secured microservices using the Wildfly Swarm Java packager and other technologies. ClassPass, meanwhile, will provide an introduction to microservices in Java, and Tibco will talk about cloud-native microservices, featuring container solutions like Docker. IBM plans to discuss Java EE microservices in situations ranging from the Raspberry Pi board to the cloud.

For IoT, Oracle will give a presentation on designing a lightweight Java-powered gateway architecture. The company will also discuss IoT and Java ME (Micro Edition) as it relates to the Oracle Internet of Things Cloud Service. The Eclipse Foundation, meanwhile, will tackle IoT from an open source perspective.

Oracle will cover modularization planned for Java 9, which is due next March; Java EE, the troubled server-side version of the platform, will be explored as well. Dassault Systems will give a presentation on conducting builds via Gradle, and Payara Systems will cover baking reactive behavior into EE applications. A Java developer at Tomitribe will talk about developing applications using both Java EE 7 and Java SE 8.

Java’s coexistence with JavaScript also gets a nod. IBM will present on emerging Web app architectures with the Node.js server-side JavaScript platform and Java. The session will cover architectures bringing together the Web scale and browser experience characteristics of Node with the transactional characteristics of Java. For its part, Oracle will cover monitoring Web systems that have JavaScript UI logic coupled with Java on the server in a discussion on Oracle Application Performance Monitoring Cloud Service.

Another JavaScript technology, the Angular framework, will be the subject of a presentation by LTE Consulting about building Angular 2 applications with Java 8 via the Angular2Boot framework.

 

[Source: Javaworld]

 

GitHub takes aim at downtime with DGit

GitHub has rolled out a new feature that it claims will make the widely used code hosting platform far less prone to downtime.

Distributed Git (DGit) uses the sync mechanisms of the Git protocol to replicate GitHub’s repositories among three servers. Should one server go offline because of a mishap or for maintenance, traffic can be redirected to the other two.

Using Git as the replication mechanism provides companies with a little more flexibility than simply mirroring blocks between disks, according to GitHub. “Think of the replicas as three loosely coupled Git repositories kept in sync via Git protocols, rather than identical disk images full of repositories,” says the blog post describing the new system. Read operations can be directed to a specific replica if needed, and new replicas of a given repository can be created if a file server has to be taken offline.

Another advantage to using Git is that GitHub understands the protocol, which is heavily optimized for synchronizing between systems. “Why reinvent the wheel when there is a Formula One racing car already available?” says GitHub. Using Git also means the failover process requires less manual intervention, and failover servers are not simply sitting idle; they’re actively used for serving read operations and receiving writes.
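
The underlying idea can be illustrated with a toy sketch. This is not GitHub’s DGit code, and the replica URLs and ref are placeholders; it simply shows how the ordinary Git protocol can act as the replication mechanism by pushing one logical repository to several replicas, any of which can then serve reads.

    # Toy illustration of Git-protocol replication: push a ref to each replica.
    # Not GitHub's implementation; remotes and the ref are placeholders.
    import subprocess

    REPLICAS = [
        "git@fileserver1.example.com:repos/project.git",
        "git@fileserver2.example.com:repos/project.git",
        "git@fileserver3.example.com:repos/project.git",
    ]

    def replicate(ref="refs/heads/master"):
        """Push the given ref to every replica so healthy copies stay in sync."""
        for remote in REPLICAS:
            try:
                subprocess.run(["git", "push", remote, ref], check=True)
            except subprocess.CalledProcessError:
                # An offline replica is skipped here; a DGit-style system would
                # resynchronize it from the healthy copies once it returns.
                print(f"replica unavailable, skipping: {remote}")

    if __name__ == "__main__":
        replicate()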

GitHub’s new Git-driven synchronization architecture makes it easier for multiple, redundant copies of repositories to be made available. All the replicas are live, and can serve read-only files and accept writes as needed.

The rollout of DGit has been a gradual process. GitHub ported its own repositories first, testing to make sure they still worked correctly, then started moving third-party and public repositories. Next came the busiest and most heavily trafficked repos, “to get as much traffic and as many different usage patterns into DGit as we could.” Currently, 58 percent of all repositories have been moved. The rest are slated to follow “as quickly as we can,” GitHub says, since moving to DGit is “a key foundation that will enable more upcoming innovations.”

The biggest advantage of DGit is less downtime. Even a small amount of GitHub downtime — whether because of disaster or attacks — leaves many projects and organizations temporarily crippled.

Third parties have addressed GitHub downtime with both complementary products, like Anam.io’s repository backup services, and competing products, like the GitLab open source alternative. But for many organizations, it could be easier to turn to GitHub and its increasingly ambitious enterprise solutions to do the heavy lifting.

 

[Source:- JW]
