Microsoft 365 Is The Office And Windows Bundle Targeted At Business Users

Microsoft has just unveiled Microsoft 365, which bundles together Office 365, Windows 10, and Enterprise Mobility + Security, giving “a complete, intelligent and secure solution to empower employees.”

Microsoft Announces New Office 365 Plans For Businesses

Essentially, Microsoft 365 is a new way for enterprises to purchase Office and Windows together, bundling the company’s mainline software into a single subscription. Microsoft will also offer Microsoft 365 Business, debuting in public preview on Aug. 2; it includes Office 365 Business Premium along with security and management features for Office software and devices running Windows 10.

Microsoft CEO Satya Nadella unveiled both bundles at the company’s Inspire partner conference, where some 17,000 attendees gathered to hear about Microsoft’s partnerships and other plans.

Microsoft says the workplace is changing, especially as teams become more globally distributed, and that a new workplace culture is emerging from these trends. Its new plans are a reflection of that shift.

Microsoft 365 Enterprise And 365 Business Plans And Release Date

Microsoft 365 Enterprise will be offered in two plans: Microsoft 365 E3 and Microsoft 365 E5. Both will launch on Aug. 1. Microsoft hasn’t laid out pricing details yet, but says they’ll depend on the specific plan and “other factors.”

Microsoft 365 Business, meanwhile, will reach its full release later this fall, following the public preview on Aug. 2. It will cost $20 per user per month.

Ahead of both release dates, Microsoft will let users try three applications coming to both Office 365 Business Premium and Microsoft 365 Business. These applications include Microsoft Connections, an email marketing service; Microsoft Listings, a publishing tool for business information; and Microsoft Invoicing, which is pretty self-explanatory.

The company has also added MileIQ, its mileage-tracking app, to Office 365 Business Premium. In addition, Microsoft has launched Azure Stack, which allows businesses to host their own hybrid cloud. Several companies, including HP, Lenovo, and Dell, are building systems to run Azure Stack, with the first shipments set to arrive in September.

Microsoft’s cloud business has been one of its most profitable units in recent years, a saving grace after the tumble of its Windows Phone venture and other less alluring products and services. As the company pushes further into the cloud, expect it to approach cloud-based services even more extensively going forward.

“We are incredibly enthusiastic about Microsoft 365 and how it will help customers and partners drive growth and innovation,” said Microsoft.

Thoughts about Microsoft’s new Office 365 bundles? Feel free to sound off in the comments section below!

[Source: TechTimes]

Look back at Mac OS X’s history with 5K versions of all the default wallpapers

Mac OS X / macOS has been a fundamental part of Apple’s modern-day renaissance. Throughout the years, the company has graced each version of its computer operating system with default desktop wallpaper that has ranged from instantly iconic to, well, some really nice pictures of mountains.

But most of the older wallpapers were never designed for higher-resolution screens, so if you’ve been hoping to use the classic Aqua wallpaper from OS X 10.3 Panther or 10.5 Leopard’s famous Aurora on your fancy new 5K Mac, you’ve been pretty much out of luck.

Fortunately, Apple aficionado Stephen Hackett of 512Pixels, in partnership with Twitter user @forgottentowel, has created a centralized place to find upscaled 5K versions of every main OS X wallpaper ever made. They’re ideal for the Retina display on a current-gen iMac or MacBook Pro.


As a warning, there’s only so much upscaling can do with the older images, so while you shouldn’t expect razor-sharp crispness, the results are still better than the original 1024 x 768 wallpaper that shipped with Mac OS X 10.0 Cheetah.

[Source: The Verge]

PHP at 20: From pet project to powerhouse

When Rasmus Lerdorf released “a set of small tight CGI binaries written in C,” he had no idea how much his creation would impact Web development. Delivering the opening keynote at this year’s SunshinePHP conference in Miami, Lerdorf quipped, “In 1995, I thought I had unleashed a C API upon the Web. Obviously, that’s not what happened, or we’d all be C programmers.”

In fact, when Lerdorf released version 1.0 of Personal Home Page Tools — as PHP was then known — the Web was very young. HTML 2.0 would not be published until November of that year, and HTTP/1.0 not until May the following year. NCSA HTTPd was the most widely deployed Web server, and Netscape Navigator was the most popular Web browser, with Internet Explorer 1.0 to arrive in August. In other words, PHP’s beginnings coincided with the eve of the browser wars.

Those early days speak volumes about PHP’s impact on Web development. Back then, our options were limited when it came to server-side processing for Web apps. PHP stepped in to fill our need for a tool that would enable us to do dynamic things on the Web. That practical flexibility captured our imaginations, and PHP has since grown up with the Web. Now powering more than 80 percent of the Web, PHP has matured into a scripting language that is especially suited to solve the Web problem. Its unique pedigree tells a story of pragmatism over theory and problem solving over purity.

The Web glue we got hooked on

PHP didn’t start out as a language, and this is clear from its design — or lack thereof, as detractors point out. It began as an API to help Web developers access lower-level C libraries. The first version was a small CGI binary that provided form-processing functionality with access to request parameters and the mSQL database. That facility with a Web app’s database would prove key in sparking our interest in PHP and in its subsequent ascendancy.

By version 2 — aka PHP/FI — database support had expanded to include PostgreSQL, MySQL, Oracle, Sybase, and more. It supported these databases by wrapping their C libraries, making them a part of the PHP binary. PHP/FI could also wrap the GD library to create and manipulate GIF images. It could be run as an Apache module or compiled with FastCGI support, and it introduced the PHP script language with support for variables, arrays, language constructs, and functions. For many of us working on the Web at that time, PHP was the kind of glue we’d been looking for.
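
To give a flavor of that glue, here is a minimal sketch of the kind of form-to-database round trip PHP made easy. It uses modern mysqli calls rather than PHP/FI’s original mSQL wrappers, and the credentials and guestbook table are hypothetical:

```php
<?php
// A hypothetical guestbook: read a form parameter, store it in the
// database, and render the stored entries. This is the form-plus-database
// glue that PHP/FI pioneered, written in today's syntax.
$db = mysqli_connect('localhost', 'user', 'secret', 'guestbook');

if (!empty($_POST['message'])) {
    $message = $_POST['message'];
    $stmt = mysqli_prepare($db, 'INSERT INTO entries (message) VALUES (?)');
    mysqli_stmt_bind_param($stmt, 's', $message);
    mysqli_stmt_execute($stmt);
}

$result = mysqli_query($db, 'SELECT message FROM entries ORDER BY id DESC');
while ($row = mysqli_fetch_assoc($result)) {
    echo '<p>' . htmlspecialchars($row['message']) . '</p>';
}
```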

As PHP folded in more and more programming language features, morphing into version 3 and onward, it never lost this gluelike aspect. Through repositories like PECL (PHP Extension Community Library), PHP could tie together libraries and expose their functionality to the PHP layer. This capacity to bring together components became a significant facet of the beauty of PHP, though it was not limited to its source code.

The Web as a community of coders

PHP’s lasting impact on Web development isn’t limited to what can be done with the language itself. How work in PHP gets done and who participates — these too are important parts of PHP’s legacy.

As early as 1997, PHP user groups began forming. One of the earliest was the Midwest PHP User’s Group (later known as Chicago PHP), which held its first meeting in February 1997. This was the beginning of what would become a vibrant, energetic community of developers assembled over an affinity for a little tool that helped them solve problems on the Web. The ubiquity of PHP made it a natural choice for Web development. It became especially popular in the shared hosting world, and its low barrier to entry was attractive to many early Web developers.

With a growing community came an assortment of tools and resources for PHP developers. The year 2000 — a watershed moment for PHP — witnessed the first PHP Developers’ Meeting, a gathering of the core developers of the programming language, who met in Tel Aviv to discuss the forthcoming 4.0 release. PHP Extension and Application Repository (PEAR) also launched in 2000 to provide high-quality userland code packages following standards and best practices. The first PHP conference, PHP Kongress, was held in Germany soon after. PHPDeveloper.org came online, and to this day, it is the most authoritative news source in the PHP community.

This communal momentum proved vital to PHP’s growth in subsequent years, and as the Web development industry erupted, so did PHP. PHP began powering more and larger websites. More user groups formed around the world. Mailing lists; online forums; IRC; conferences; trade journals such as php[architect], the German PHP Magazin, and International PHP Magazine — the vibrancy of the PHP community had a significant impact on the way Web work would be done: collectively and openly, with an emphasis on code sharing.

Then, 10 years ago, shortly after the release of PHP 5, an interesting thing happened in Web development that created a general shift in how the PHP community built libraries and applications: Ruby on Rails was released.

The rise of frameworks

The Ruby on Rails framework for the Ruby programming language brought increased focus and attention to the MVC (model-view-controller) architectural pattern. The Mojavi PHP framework had used this pattern a few years prior, but the hype around Ruby on Rails firmly cemented MVC in the PHP frameworks that followed. Frameworks exploded in the PHP community, and they have changed the way developers build PHP applications.

Many important projects and developments have arisen, thanks to the proliferation of frameworks in the PHP community. The PHP Framework Interoperability Group formed in 2009 to aid in establishing coding standards, naming conventions, and best practices among frameworks. Codifying these standards and practices helped provide more interoperable software for developers using member projects’ code. This interoperability meant that each framework could be split into components and stand-alone libraries could be used together with monolithic frameworks. With interoperability came another important milestone: The Composer project was born in 2011.

Inspired by Node.js’s NPM and Ruby’s Bundler, Composer has ushered in a new era of PHP application development, creating a PHP renaissance of sorts. It has encouraged interoperability between packages, standard naming conventions, adoption of coding standards, and increased test coverage. It is an essential tool in any modern PHP application.
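
As a rough illustration of what Composer standardized, a project declares its dependencies and autoloading rules in a composer.json file like the sketch below. The package name acme/blog and the exact version constraints are hypothetical; monolog/monolog is a real Packagist package commonly used in examples:

```json
{
    "name": "acme/blog",
    "description": "Hypothetical project showing Composer conventions",
    "require": {
        "php": ">=5.5",
        "monolog/monolog": "^1.17"
    },
    "autoload": {
        "psr-4": { "Acme\\Blog\\": "src/" }
    }
}
```

Running composer install resolves those constraints, fetches the packages, and generates a vendor/autoload.php file that the application includes once to gain access to every installed package.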

The need for speed and innovation

Today, the PHP community has a thriving ecosystem of applications and libraries. Some of the most widely installed PHP applications include WordPress, Drupal, Joomla, and MediaWiki. These applications power the Web presence of businesses of all sizes, from small mom-and-pop shops to sites like whitehouse.gov and Wikipedia. Six of the Alexa top 10 sites use PHP to serve billions of pages a day. As a result, PHP applications have been optimized for speed — and much innovation has gone into PHP core to improve performance.

In 2010, Facebook unveiled its HipHop for PHP source-to-source compiler, which translates PHP code into C++ code and compiles it into a single executable binary application. Facebook’s size and growth necessitated the move away from standard interpreted PHP code to a faster, optimized executable. However, Facebook wanted to continue using PHP for its ease of use and rapid development cycles. HipHop for PHP evolved into HHVM, a JIT (just-in-time) compilation-based execution engine for PHP, which included a new language based on PHP: Hack.

Facebook’s innovations, as well as other VM projects, created competition at the engine level, leading to discussions about the future of the Zend Engine that still powers PHP’s core, as well as the question of a language specification. In 2014, a language specification project was created “to provide a complete and concise definition of the syntax and semantics of the PHP language,” making it possible for compiler projects to create interoperable PHP implementations.

The next major version of PHP became a topic of intense debate. A project known as phpng (next generation) proposed to clean up, refactor, optimize, and improve the PHP code base, and it demonstrated substantial performance improvements on real-world applications. After deciding to name the next major version “PHP 7” (skipping 6, owing to a previous, unreleased PHP 6.0 release), the core team merged in the phpng branch and planned to work in many of the language features pioneered by Hack, such as scalar and return type hinting.
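
As a quick sketch of what those features look like in PHP 7 syntax (the function below is illustrative, not drawn from any particular project):

```php
<?php
declare(strict_types=1); // opt in to strict checking of scalar types

// Scalar parameter hints (string, int) and a return type hint,
// both new in PHP 7.
function repeatLabel(string $label, int $times): string
{
    return str_repeat($label, $times);
}

echo repeatLabel('php', 3);  // prints "phpphpphp"
// repeatLabel('php', '3'); // TypeError when strict_types is enabled
```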

With the first PHP 7 alpha release due out today and benchmarks showing performance as good as or better than that of HHVM in many cases, PHP is keeping up with the pace of modern Web development needs. Likewise, the PHP-FIG continues to innovate and push frameworks and libraries to collaborate and cooperate — most recently with the adoption of PSR-7, which will change the way PHP projects handle HTTP. User groups, conferences, publications, and initiatives like PHPMentoring.org continue to advocate best practices, coding standards, and testing to the PHP developer community.
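
To illustrate what PSR-7 standardizes, here is a sketch of a handler written against its interfaces. The concrete request and response objects would come from any PSR-7 implementation, and the greet function itself is hypothetical:

```php
<?php
use Psr\Http\Message\ResponseInterface;
use Psr\Http\Message\ServerRequestInterface;

// PSR-7 messages are immutable: with*() methods return modified copies,
// so handlers and middleware from different frameworks can interoperate.
function greet(ServerRequestInterface $request, ResponseInterface $response)
{
    $params = $request->getQueryParams();
    $name = isset($params['name']) ? $params['name'] : 'world';

    $response->getBody()->write("Hello, {$name}!");

    return $response->withHeader('Content-Type', 'text/plain');
}
```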

PHP has seen the Web mature through various stages, and PHP has matured along with it. Once a simple API wrapper around lower-level C libraries, PHP has become a full-fledged programming language in its own right. Its developer community is vibrant and helpful, priding itself on pragmatism and welcoming newcomers. PHP has stood the test of time for 20 years, and current activity in the language and the community is ensuring it will remain relevant and useful for years to come.

During his SunshinePHP keynote, Rasmus Lerdorf reflected, “Did I think I’d be here 20 years later talking about this silly little project I did? No, I didn’t.”

Here’s to Lerdorf and the rest of the PHP community for transforming this “silly little project” into a lasting, powerful component of the Web today.

[Source: InfoWorld]

Take a closer look at your Spark implementation

Apache Spark, the extremely popular data analytics execution engine, was initially released in 2012. It wasn’t until 2015 that Spark really saw an uptick in support, but by November 2015, Spark saw 50 percent more activity than the core Apache Hadoop project itself, with more than 750 contributors from hundreds of companies participating in its development in one form or another.

Spark is a hot new commodity for a reason. Its performance, general-purpose applicability, and programming flexibility combine to make it a versatile execution engine. Yet that versatility also leads to varying levels of support for the product and to differences in how solutions are delivered.

While evaluating analytic software products that support Spark, customers should look closely under the hood and examine four key facets of how the support for Spark is implemented:

  • How Spark is utilized inside the platform
  • What you get in a packaged product that includes Spark
  • How Spark is exposed to you and your team
  • How you perform analytics with the different Spark libraries

Spark can be used as a developer tool via its APIs, or it can be used by BI tools via its SQL interface. Or Spark can be embedded in an application, providing access to business users without requiring programming skills and without limiting Spark’s utility through a SQL interface. I examine each of these options below and explain why all Spark support is not the same.

Programming on Spark

If you want the full power of Spark, you can program directly to its processing engine. There are APIs that are exposed through Java, Python, Scala, and R. In addition to stream and graph processing components, Spark offers a machine-learning library (MLlib) as well as Spark SQL, which allows data tools to connect to a Spark engine and query structured data, or programmers to access data via SQL queries they write themselves.

A number of vendors offer standalone Spark implementations; the major Hadoop distribution suppliers also offer Spark within their platforms. Access is exposed through either a command-line or a notebook interface.

But performing analytics on core Spark with its APIs is a time-consuming, programming-intensive process. While Spark offers an easier programming model than, say, native Hadoop, it still requires developers. Even for organizations with developer resources, assigning them to lengthy data analytics projects may amount to an intolerable hidden cost. For many organizations, programming on Spark is therefore not a practical course.

BI on Spark

Spark SQL is a standards-based way to access data in Spark. It has been relatively easy for BI products to add support for Spark SQL to query tabular data in Spark. The dialect of SQL used by Spark is similar to that of Apache Hive, making Spark SQL akin to earlier SQL-on-Hadoop technologies.

Although Spark SQL uses the Spark engine behind the scenes, it suffers from the same disadvantages as Hive and Impala: data must be in a structured, tabular format to be queried. This forces Spark to be treated as if it were a relational database, which cripples many of the advantages of a big data engine. Simply put, layering BI on top of Spark requires transforming the data into a reasonable tabular format that the BI tools can consume.

Embedding Spark

Another way to leverage Spark is to abstract away its complexity by embedding it deep into a product and taking full advantage of its power behind the scenes. This allows users to leverage the speed and power of Spark without needing developers.

This architecture raises three key questions. First, does the platform truly hide all of Spark’s technical complexities? As a customer, you need to examine how you would carry out each step of the analytic cycle — integration, preparation, analysis, visualization, and operationalization. A number of products offer self-service capabilities that abstract away Spark’s complexities, but others force the analyst to dig down and code — for example, when performing integration and preparation. These products may also require you to first ingest all your data into the Hadoop file system for processing, which lengthens your analytic cycles, creates fragile and fragmented analytic processes, and requires specialized skills.

Second, how does the platform take advantage of Spark? It’s critical to understand how Spark is used in the execution framework. Spark is sometimes embedded in a fashion that does not have the full scalability of a true cluster. This can limit overall performance as the volume of analytic jobs increases.

Third, how are you protected for the future? The strength of being tightly coupled to the Spark engine is also a weakness. The big data industry moves quickly: MapReduce was the predominant engine in Hadoop for six years, Apache Tez became mainstream in 2013, and now Spark has become a major engine. Assuming the technology curve continues to produce new engines at the same rate, Spark will almost certainly be supplanted by a new engine within 18 months, forcing products tightly coupled to Spark to be reengineered — a far from trivial undertaking. Even setting that effort aside, you must consider whether the redesigned product would remain fully compatible with what you’ve built on the older version.

The first step to uncovering the full power of Spark is to understand that not all Spark support is created equal. It’s crucial that organizations grasp the differences in Spark implementations and what each approach means for their overall analytic workflow. Only then can they make a strategic buying decision that will meet their needs over the long haul.

Andrew Brust is senior director of market strategy and intelligence at Datameer.

[Source: InfoWorld]

MySQL zero-day exploit puts some servers at risk of hacking

A zero-day exploit could be used to hack MySQL servers.

A publicly disclosed vulnerability in the MySQL database could allow attackers to completely compromise some servers.

The vulnerability affects “all MySQL servers in default configuration in all version branches (5.7, 5.6, and 5.5) including the latest versions,” as well as the MySQL-derived databases MariaDB and Percona DB, according to Dawid Golunski, the researcher who found it.

The flaw, tracked as CVE-2016-6662, can be exploited to modify the MySQL configuration file (my.cnf) and cause an attacker-controlled library to be executed with root privileges if the MySQL process is started with the mysqld_safe wrapper script.
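For context on the mechanism, mysqld_safe supports a malloc-lib option that tells it to preload a shared library when starting the server. A my.cnf fragment along these lines is the kind of configuration change the advisory warns about, since whatever library is named gets loaded by a process running with root privileges (the path below is an illustrative placeholder, not from the advisory):

```ini
[mysqld]
# mysqld_safe preloads the library named here before starting mysqld;
# an attacker who can write to my.cnf can point it at their own code.
malloc_lib=/tmp/attacker_controlled.so
```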

The exploit can be executed if the attacker has an authenticated connection to the MySQL service, which is common in shared hosting environments, or through an SQL injection flaw, a common type of vulnerability in websites.

Golunski reported the vulnerability to the developers of all three affected database servers, but so far only MariaDB and Percona DB have received patches. Oracle, which develops MySQL, was informed on Jul. 29, according to the researcher, but has yet to fix the flaw.

Oracle releases security updates on a quarterly schedule, and the next batch is expected in October. However, because the MariaDB and Percona patches have been public since the end of August, the researcher decided to release details about the vulnerability on Monday so that MySQL admins can take action to protect their servers.

Golunski’s advisory contains a limited proof-of-concept exploit, but some parts have been intentionally left out to prevent widespread abuse. The researcher also reported a second vulnerability to Oracle, CVE-2016-6663, that could further simplify the attack, but he hasn’t published details about it yet.

The disclosure of CVE-2016-6662 was met with some criticism on specialized discussion forums, where some users argued that it’s actually a privilege escalation vulnerability and not a remote code execution one as described, because an attacker would need some level of access to the database.

“As temporary mitigations, users should ensure that no mysql config files are owned by mysql user, and create root-owned dummy my.cnf files that are not in use,” Golunski said in his advisory. “These are by no means a complete solution and users should apply official vendor patches as soon as they become available.”

Oracle didn’t immediately respond to a request for comment on the vulnerability.

[Source: InfoWorld]

Wrists-on with Garmin’s new fenix 5 lineup at CES 2017

If you’re a serious athlete who’s been looking for a powerful multisport fitness watch, odds are you’ve stumbled across Garmin’s fēnix line of devices. While they’re quite pricey, the fēnix 3 line has proven to be one of the most powerful multisport watches on the market.

At CES 2017, Garmin has unveiled three new entries in its fēnix lineup: the fēnix 5, fēnix 5S, and fēnix 5X.

As the names suggest, all three devices belong to the same family and share most of the same features, but there are a few big differentiators. The fēnix 5 is the standard model; the fēnix 5S is a lighter, sleeker, and smaller version of it; and the 5X is the highest-end device of the bunch, complete with preloaded wrist-based mapping.

Measuring 47mm, the standard fēnix 5 is more compact than previous models like the fēnix 3HR but still packs all the multisport features you’ve come to expect from the series.

Garmin says the fēnix 5S is the first watch in the line designed specifically for female adventurers. Measuring just 42mm, the 5S is small and comfortable for petite wrists without compromising any multisport features. It’s available in silver with a choice of white, turquoise, or black silicone bands, under a mineral glass lens.

There’s also a fēnix 5S Sapphire model with a scratch-resistant sapphire lens that’s available in black with a black band, champagne with a water-resistant gray suede band, or champagne with a metal band. This model also comes with an extra silicone QuickFit band.

fēnix 5X

The higher-end fēnix 5X measures 51mm and comes preloaded with TOPO US mapping, routable cycling maps and other navigation features like Round Trip Run and Round Trip Ride. With these new features, users can enter how far they’d like to run or ride, and their watch will suggest appropriate courses to choose from. The 5X will also display easy-to-read guidance cues for upcoming turns, allowing users to be aware of their route.

In addition, the 5X offers an Around Me map mode that displays nearby points of interest and other map objects within the user’s range, helping users stay aware of their surroundings. This model will be available with a scratch-resistant sapphire lens.

[Source: Android Authority]

New Mac Pro release date rumours and tech specs: Kaby Lake processors expected in March 2017 Mac Pro update

When will Apple release a new Mac Pro? And what new features, specs and design changes should we expect when Apple updates the Mac Pro line for 2016? Is there any chance Apple will discontinue the Mac Pro instead of updating it?

Apple’s Mac Pro line-up could do with an update. The current Mac Pro model was announced at WWDC in June 2013 and, for a top-of-the-range system, it is looking pretty long in the tooth. But when will Apple announce a new Mac Pro? And what hardware improvements, design changes, tech specs and new features will we see in the new Mac Pro for 2016? (Or 2017, or…)

There’s some good news for expectant Mac Pro fans: code in Mac OS X El Capitan hints that a new Mac Pro (one with 10 USB 3 ports) could arrive soon. But nothing is certain at this point, and some pundits believe the Mac Pro should simply be discontinued.

Whatever the future holds for the Mac Pro, in this article we will be looking at all the rumours surrounding the next update of the Mac Pro line: the new Mac Pro’s UK release date and pricing, its expected design, and the new features and specs we hope to see in the next version of the Mac Pro.

Updated on 6 December 2016 to discuss the chances of a new Mac Pro appearing in March; and on 15 November with updated processor rumours

For more discussion of upcoming Apple launches, take a look at our New iMac rumours and our big roundup of Apple predictions for 2017. And if you’re considering buying one of the current Mac Pro models, read Where to buy Mac Pro in the UK and our Mac buying guide.

[Source: Macworld]

Third Xiaomi Redmi 3S Prime, Redmi 3S flash sale today at 12 PM


If you missed your chance to pick up the Xiaomi Redmi 3S or Redmi 3S Prime in the first two flash sales, you’ll have another opportunity today at 12 PM.

You only have to look back at the last two flash sales to see the popularity of these ultra-affordable smartphones from Xiaomi. In the first sale, held two weeks ago, the Redmi 3S Prime sold out in just eight minutes; in the second, which also included the Redmi 3S, stocks lasted only until around 5 PM that evening.

If you’re wondering which device is better suited to your needs, it’s worth noting that the two versions share most specifications and features, including the huge 4,100mAh battery found in both. As for the differences, the cheaper Redmi 3S comes with 16GB of storage and 2GB of RAM, while the Redmi 3S Prime offers 32GB of internal storage and 3GB of RAM and adds an extra layer of security with a fingerprint scanner.

The Xiaomi Redmi 3S is priced at Rs. 6,999 (~$104), while the Redmi 3S Prime is only slightly more expensive at Rs. 8,999 (~$134). The two devices will be available from both mi.com and Flipkart, with Flipkart also offering an exchange deal, so if you have an old Android smartphone lying around, you can trade it in and pick up one of these affordable phones for even less.

Let us know if you were able to pick up the Xiaomi Redmi 3S or Redmi 3S Prime during today’s sale, and if you already have the phone, do share your thoughts on what your experience has been with it so far in the comments section below!

[Source: Android Authority]