New Mac Pro release date rumours UK | Mac Pro 2016 tech specs: Kaby Lake processors expected at March 2017 Mac Pro update

When will Apple release a new Mac Pro? And what new features, specs and design changes should we expect when Apple updates the Mac Pro line for 2016? Is there any chance Apple will discontinue the Mac Pro instead of updating it?

Apple’s Mac Pro line-up could do with an update. The current Mac Pro model was announced at WWDC in June 2013 and, for a top-of-the-range system, the Mac Pro is looking pretty long in the tooth. But when will Apple announce a new Mac Pro? And what hardware improvements, design changes, tech specs and new features will we see in the new Mac Pro for 2016? (Or 2017, or…)

There’s some good news for expectant Mac Pro fans: code in Mac OS X El Capitan hints that a new Mac Pro (one with 10 USB 3 ports) could arrive soon. But nothing is certain at this point, and some pundits believe the Mac Pro should simply be discontinued.

Whatever the future holds for the Mac Pro, in this article we will be looking at all the rumours surrounding the next update of the Mac Pro line: the new Mac Pro’s UK release date and pricing, its expected design, and the new features and specs we hope to see in the next version of the Mac Pro.

Updated on 6 December 2016 to discuss the chances of a new Mac Pro appearing in March; and on 15 November with updated processor rumours

For more discussion of upcoming Apple launches, take a look at our New iMac rumours and our big roundup of Apple predictions for 2017. And if you’re considering buying one of the current Mac Pro models, read Where to buy Mac Pro in the UK and our Mac buying guide.

 

 


[Source:- Macworld]

MrMobile on the Sennheiser PXC 550: Better than Bose?


The Sennheiser PXC 550 can keep your ears closed off to the rest of the world. A must-have for those of you with long holiday travels and long holidays with family. They’re good-looking and long-lasting, just like everything you’d want in a … pair of noise-reducing headphones. With the added luxury of a touch-sensitive panel to help control what you’re listening to, you’ll really drive your … headphones wild.

MrMobile, in his infinite wisdom, will help you decide if these cans are what your ears have been looking (listening?) for, or if you’re better off with the Bose QC35. Take Michael Fisher’s advice, and you’ll be ready for all that December has to offer.

 

 

[Source:- Windowscentral]

An easier way to set up SQL Server on an Azure virtual machine


A new setup procedure will allow users to configure SQL Server on a Microsoft Azure virtual machine without the aid of a database administrator.

“The new wizard for building and configuring a new virtual machine with SQL Server 2014 is very well put together,” said Denny Cherry, founder and principal consultant for Denny Cherry and Associates Consulting. “It helps solve a lot of the complexity of building a new SQL Server, specifically around how you need to configure the storage in order to get a high-performing SQL Server VM.”

Joseph D’Antoni, principal consultant at Denny Cherry and Associates Consulting, said that one of the major challenges with Azure was allocating storage. For instance, he said, to configure SQL Server on an Azure VM you needed to allocate disks manually to reach the required number of IOPS (input/output operations per second). This meant you had to know exactly what your application’s storage needs were for optimal performance, and many people were “kind of guessing,” D’Antoni said. With the new wizard, all you have to do is enter the required number of IOPS and storage is allocated automatically.
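For context, here is a minimal sketch of the kind of manual disk allocation the wizard now handles for you, using today's Azure CLI (the resource group, VM and disk names are made up). Each premium disk contributes a fixed number of IOPS, so before the wizard you had to attach enough disks of the right size to hit your target:

# Manually attach premium data disks until their combined IOPS meets
# the workload's requirement; the wizard now does this from a single
# IOPS figure entered in the portal.
$ az vm disk attach --resource-group my-sql-rg --vm-name my-sql-vm \
  --name sqldata1 --new --size-gb 1023 --sku Premium_LRS --caching ReadOnly
$ az vm disk attach --resource-group my-sql-rg --vm-name my-sql-vm \
  --name sqldata2 --new --size-gb 1023 --sku Premium_LRS --caching ReadOnly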

Automating SQL Server setup for an Azure VM means everything no longer has to be done manually: connectivity, performance, security and storage are configured automatically during setup. “I think it does simplify what was a pretty complex process,” D’Antoni said.

You can now use the Internet to set up SQL Server connectivity and enable SQL Server authentication through the Azure Web portal. Previously, connecting SQL Server to an Azure VM via the Internet required a multistep process using SQL Server Management Studio. The new automated configuration process lets you pick whether to expand connectivity to the whole Azure virtual network or to connect only within the individual VM.
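As a rough illustration of the end state, once the wizard has opened connectivity to the internet and enabled SQL Server authentication, a client can connect directly with sqlcmd (the DNS name and login below are placeholders created during setup):

# Connect over the internet using the SQL Server login configured
# during setup and run a quick sanity check.
$ sqlcmd -S mysqlvm.westeurope.cloudapp.azure.com,1433 -U sqladmin -P 'YourStrongPassword' -Q "SELECT @@VERSION;"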

The new process for configuring SQL Server on an Azure virtual machine also includes automated patching and automated backup. Automated patching lets you pick a time window in which patches are applied, so you can schedule them to minimize the impact on your workload. Automated backup lets you specify how long to keep backups.

“I think that these are a great enhancement on the old process of having to know how to configure these components manually within the VM,” Cherry said, “because these configurations can get tricky to configure.”

D’Antoni added that this innovation is going to affect smaller organizations the most, because it means that they won’t need an expert to move SQL Server onto an Azure virtual machine. “[The simplified configuration] gives the power to someone who is deploying a VM when they would have needed an administrator or a DBA before. To that extent, it’s kind of a big deal.”

 
[Source:- searchsqlserver]

 

TSMC wants to build a new factory for 5nm and 3nm chips


TSMC wants to make chipsets even smaller – I mean, who doesn’t? Smaller chips mean more room for other components while also being more power-efficient. But before we get ahead of ourselves, the reality is that these 5nm and 3nm chipsets are a long way away. After all, TSMC hasn’t even unveiled its 10nm chips yet. According to leaked documents, TSMC will begin 10nm production this year for the next iPhone’s A11 chipset.

However, the Taiwanese semiconductor giant certainly seems to be looking ahead as the competition to produce smaller and smaller chips grows. Nikkei Asian Review reports that TSMC is planning on building a $16 billion advanced chip facility in order to maintain its lead in the global market:

We’re asking the government to help us find a plot that is large enough and has convenient access so we can build an advanced chip plant to manufacture 5-nanometer and 3nm chips.

Easier said than done, however. The smaller the chip, the more advanced it is, but also the harder it is to manufacture. TSMC has previously said that the company is looking to push out 7nm chips by 2017 and 5nm chips by 2020, but I personally think that’s overly optimistic. Mass producing 10nm chips is hard enough as it is – and that’s precisely why Samsung is one of the very few companies with the necessary resources to do it. Thermal challenges as well as other optimization issues are going to take a significant amount of time to be addressed fully when it comes to even smaller structures.

The race to see who produces the smallest chips the fastest will continue, but it is highly unlikely that we will see anything smaller than 10nm before 2018 or later.

 

 

 

[Source:- Androidauthority]

Lift language opens the door to cross-platform parallelism


Wouldn’t it be great to write code that runs high-speed parallel algorithms on just about every kind of hardware out there, and without needing to be hand-tweaked to run well on GPUs versus CPUs?

That’s the promise behind a new project being developed by professors and students from the University of Edinburgh and the University of Münster, with support from Google. Together they’re proposing a new open source functional language, called “Lift,” for writing algorithms that run in parallel across a wide variety of hardware.

Lift generates code for OpenCL, a programming framework designed to target CPUs, GPUs and FPGAs alike, and automatically produces optimizations for each of those hardware types.

OpenCL can be optimized “by hand” to improve performance in different environments — on a GPU versus a regular CPU, for instance. Unfortunately, those optimizations aren’t portable across hardware types, and code has to be optimized for CPUs and GPUs separately. In some cases, OpenCL code optimized for GPUs won’t even work at all on a CPU. Worse, the optimizations in question are tedious to implement by hand.

Lift is meant to work around all this. In language-hacker terms, Lift is what’s called an “intermediate language,” or IL. According to a paper that describes the language’s concepts, it’s intended to allow the programmer to write OpenCL code by way of high-level abstractions that map to OpenCL concepts. It’s also possible for users to manually declare functions written in “a subset of the C language operating on non-array data types.”

When Lift code is compiled to OpenCL, it’s automatically optimized by iterating through many possible versions of the code and then testing their actual performance. This way, the results are not optimized in the abstract for some given piece of hardware; instead, they’re based on the measured performance of the algorithm on the hardware in question.

One stated advantage of targeting multiple hardware architectures with a given algorithm is that it allows the same distributed application to run on a wider variety of hardware and to take advantage of heterogeneous architectures. If you have a system with a mix of CPU, GPU and FPGA hardware, or two different kinds of GPU, the same application can in theory take advantage of all of those resources simultaneously. The end result is easier to deploy, since it isn’t confined to any one kind of setup.

 

 

[Source:- Javaworld]

New MacBook Pro 2016 release date, UK price and tech specs | Complete guide to new MacBook Pro: MacBook Pro to get more RAM, Kaby Lake chips and price cut in 2017?


CONTENTS

  • New MacBook Pro announced!
  • Design
  • New features
  • Tech specs and performance
  • UK release date
  • UK prices
  • Macworld podcast – Apple’s 27 Oct launch event
  • MacBook Pro 2017
  • Read the event live blog

What are the prices, tech specs and new features of the new MacBook Pro 2016? And for that matter, when will the new MacBook Pro 2017 be released in the UK?

Welcome to our complete UK guide to the new MacBook Pro 2016, in which we cover everything you need to know about Apple’s new MacBook Pro models: UK prices and best deals, where to buy, new features, tech specs and performance stats. You can read more here: New MacBook Pro 2016 review.

But we’re not standing still now that 2016’s new MacBook Pro has been launched, and we’re already looking ahead to the next update. Later in this article we round up and analyse all the rumours related to the new MacBook Pro 2017 – its release date, specs, design, likely pricing and new features.

Updated, 30 November 2016, to expand our thoughts on the spec bump and price cut we expect the MacBook Pro to get in 2017.

New MacBook Pro 2016 release date, UK price and tech specs: New MacBook Pro announced!

Apple announced a long-awaited update to its MacBook Pro laptops at an event in San Francisco on 27 October. The laptops, available in 13in and 15in sizes, feature USB-C ports, a Retina display and a multi-touch Touch Bar: a versatile strip display that replaces the escape key, function keys and power key of a regular qwerty keyboard. We’ll look at all of these in more detail in this article.

On 2 November, Phil Schiller (senior VP of marketing at Apple) was interviewed by The Independent and discussed the company’s plans and its response to the reaction the MacBook Pro announcement received. A key point raised in the interview is that Mac and iOS devices will always remain separate from one another: the California-based company won’t try to integrate the two. Schiller also discussed the removal of the SD card slot and why Apple chose to keep the 3.5mm headphone jack.

New MacBook Pro 2016 release date, UK price and tech specs: Design

This is the first time a MacBook Pro will not include standard USB ports (that is to say, USB-A, the version we’re all used to), with both models featuring USB-C ports (two or four, depending on configuration) that also serve as Thunderbolt 3 ports. This means the MacBook Air is now the only current-generation Apple laptop with standard USB ports. (Apple does still sell a few MacBook models from the previous generation, though, including the 2015 13in and 15in MacBook Pro models, which feature the older USB ports.)

There is, thankfully, a headphone jack on the new MacBook Pro. The set-up is largely the same as on the current 12in MacBook, which has one USB-C port and one headphone jack as its only ports – the Pro just gets more of those USB-C ports (either two or four, depending on which model you go for). The new MacBook Pro no longer features MagSafe charging or an SD card slot.

 

Much like the 12in MacBook, the MacBook Pro now has butterfly mechanism keys, allowing for less travel and a thinner chassis. Apple says these second-generation butterfly keys improve on the typing experience from the 12in MacBook range.

The 13in model is 14.9mm thick, 17 percent thinner than the previous generation, and its volume is 23 percent less. It weighs 1.36kg.

The 15in model is 15.5mm thick and 20 percent less in volume than the last generation. It weighs only 1.81kg, which is very light for a 15in laptop. Apple has also added a larger Force Touch trackpad to this version.

The addition of the metal Apple logo on the casing means the iconic light-up Apple logo is no longer included on the MacBook Pro range. The 13in MacBook Air is now the last surviving MacBook to have a light-up logo, unless you count last year’s MacBook Pro.

 

 

 
[Source:- Macworld]

Xbox One sells 1 million consoles in November, neck-and-neck with PS4

NPD Group analyses and tracks console sales in the US month by month, and while it was widely reported Sony’s PlayStation 4 won November, hard sales figures have been difficult to lock down. Until now!

Our trusted source informed us that Microsoft shifted 1 million Xbox One units during November, helped along by some steep Black Friday discounts. Sony launched the beefed-up PS4 Pro during the month, yet the PlayStation family only managed to outpace the Xbox One by 100,000 units, at 1.1 million. Additionally, PlayStation VR, which now competes with Windows-based VR solutions, managed to sell roughly 68,000 units in the same month.

The fact Xbox is still pacing closely to PlayStation despite the launch of the more powerful PS4 Pro is quite impressive. The Xbox One S enjoyed various new bundles to entice consumers, and it looks as though Microsoft did well to head off competition from PlayStation. The interest in the incrementally more powerful PlayStation 4 Pro also bodes well for Microsoft’s next Xbox — Project Scorpio — which will land in 2017 with a significant raw power advantage.

Soon, Microsoft and Sony will compete head to head in the VR space as well, following the reveal of the minimum specs required for new Windows 10-based VR solutions coming in 2017. It’s unknown how VR will manifest on Project Scorpio, which is touted to provide “high-fidelity” VR experiences, but we’ll no doubt learn more at E3 2017 next summer.

Both Xbox One and PlayStation 4 are looking forward to a stellar holiday season either way, and it will be interesting to see whether Xbox One can pull ahead again in December.

 

 
[Source:- Windowscentral]

Best practices for upgrading after SQL Server 2005 end of life


The end of extended support is pushing SQL Server 2005 customers to upgrade, which raises the question: which version should they upgrade to?

Joseph D’Antoni, principal consultant at Denny Cherry & Associates Consulting, recommended upgrading to the latest SQL Server version that supports the third-party applications a company is running. He said there are big changes in between each of the versions of SQL Server, adding that SQL Server 2014 is particularly notable for the addition of the cardinality estimator. According to D’Antoni, the cardinality estimator can “somewhat drastically” change query performance for some types of data. However, the testing process is the same for all of the versions, and the same challenges — testing, time and licensing — confront any upgrade. “You’re going to have a long testing process anyway. You might as well try to get the latest version, with the longest amount of support.”
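In practice, the new cardinality estimator in SQL Server 2014 kicks in when a database’s compatibility level is raised to 120, so after an upgrade you can check and switch it per database. A minimal sketch with sqlcmd (the server and database names are hypothetical):

# List each database's compatibility level; 120 uses the new 2014
# cardinality estimator, 110 and below keep the legacy estimator.
$ sqlcmd -S myserver -E -Q "SELECT name, compatibility_level FROM sys.databases;"

# Opt a hypothetical database in to the new estimator.
$ sqlcmd -S myserver -E -Q "ALTER DATABASE MyAppDb SET COMPATIBILITY_LEVEL = 120;"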

“If it were me, right now, contending with 2005, I would go to 2014,” said Robert Sheldon, a freelance writer and technology consultant. “It’s solid, with lots of good features. There would be no reason to go with 2012, unless there were some specific licensing circumstances that were a factor.” Denny Cherry, founder and principal consultant at Denny Cherry & Associates Consulting, recommended upgrading to SQL Server 2012 at the earliest, if not 2014, because “at least they won’t have to worry about upgrading again anytime soon.”

Although SQL Server 2014 is the most current SQL Server version, SQL Server 2016 is in community technology preview. Sheldon said he doesn’t see upgrading to SQL Server 2016 as a good strategy. “Those who want to upgrade to SQL Server 2016 face a bit of a dilemma, because it is still in preview, and I have not yet heard of a concrete release date,” he said. “An organization could use CTP 3.0 to plan its upgrade strategy, but I doubt that is something I would choose to do.”

D’Antoni considered the possibility of waiting until the release of SQL Server 2016 to upgrade. “If they identify a feature that’s compelling, maybe they should wait for 2016,” he said. He added that “2016 is mature enough to roll,” and the only real problem is that it is currently unlicensed.

“If they’re already out of support and planning on moving to 2016, it could be worth waiting the few months,” Cherry said. Furthermore, Cherry said, waiting for SQL Server 2016 could save an organization from having to go through a second upgrade in the future.

Cherry added that, for everyone not waiting for SQL Server 2016, “If they haven’t started the project yet, they should get that project started quickly.” D’Antoni had an even more advanced timetable. He said a company “probably should have started already.” He added, “It’s the testing process that takes a lot of time. The upgrade process is fairly straightforward. Testing the application to make sure it works should have started fairly early.” Ideally, D’Antoni said, by this point, organizations should have done some initial application testing and be planning their migration.

A number of Cherry’s clients, ranging from small businesses to large enterprises, are upgrading because of the approaching SQL Server 2005 end of life. He described SQL Server 2005 end of life as affecting “every size, every vertical.” D’Antoni predicted the small organizations and the largest enterprises will be the hardest hit. The small corporations, he said, are likely to be using SQL Server 2005, because they lack the resources and IT personnel for an easy upgrade. The large enterprises, on the other hand, have so many systems that upgrades become difficult.

D’Antoni explained that, while it is possible to migrate to an Azure SQL database in the cloud instead of upgrading to a more advanced on-premises version of SQL Server, he doesn’t expect to see much of that — not because of difficulties with the product, but because of company culture. Companies who use the cloud, he said, are “more forward-thinking. If you’re still running 2005, you tend to be less on top of things like that.”

 

 
[Source:- searchsqlserver]

LeEco planning major business changes as financial woes continue (Update: LeEco says it is committed to US market)


In a statement sent to The Wall Street Journal, a LeEco spokesperson said the company is currently investigating why its stock price dropped close to eight percent on Tuesday, and that it made the decision to halt trading as a result. The spokesperson added that the company is “in the process of planning major matters, which are expected to involve integration of industry resources.” Exactly what types of changes are in the works is currently unknown.

Less than two months ago, LeEco was holding a big press conference in San Francisco, announcing its official launch of products in the US. That included its Le Pro3 and Le S3 Android smartphones. However, the press conference was an odd affair, full of unconnected buzzwords and demos of products like an Android-based bicycle that may now never come to market.

Since then, rumors about the company’s financial issues have continued, even though LeEco announced it had raised $600 million in new financing. Last week, sales of its phones began in the US at its own LeMall website, along with Amazon, Target and Best Buy. However, the company could decide to make an early exit from the US market as part of its upcoming changes.

 

 

[Source:- Androidauthority]

Glue, stitch, cobble: Weighing DIY container management

 

 

You’ve been tasked with helping your company stay competitive by modernizing your IT organization’s delivery of developed applications. Your company has already embraced virtualization and perhaps dabbled in the public cloud. Containers look like the next big thing for you, so you’re considering how to bring container technology to your organization. Something needs to create containers on compute resources and network them together, so you start sketching out the general components on the drawing board.

You start doing the research. You soon discover that cloud management platforms, PaaS, and container management platforms are all readily available as prepackaged software and services. Even the individual components that make up those packages are available in Open Source Land. “Hmm,” you think, “Why pay anyone for a platform when the parts are there to do this myself?”

For a brief moment, you’re taken all the way back to kindergarten. The teacher starts crafting class and opens the drawer to an array of fun-looking parts. Pastel paper, glitter, and bows! You’re ready to craft that masterpiece. All you need is a bottle of glue!

After a blink, you’re back to the IT drawing board, laying out the parts for your future container management platform in greater detail:

  • Build tools
  • Servers/OSes
  • Container runtime
  • Container-to-container networking
  • Ingress networking
  • Firewall
  • Load balancer
  • Database
  • Storage
  • DNS

All you need is the “glue” to bind these parts together.

Naturally, connecting those different parts requires varying degrees of development effort. We’ve simplified this spectrum into four general “glue levels” of effort.

Glue level 1: Direct component-to-component bridging

In this case, a component has the capability to interface directly with the next logical component in the application deployment workflow.

Let’s assume you have a Jenkins platform and an instance of Docker Engine. Get Jenkins to build code, then create a Docker image. Better yet, have Jenkins call Docker Engine itself and point Docker to your newly created image.
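A minimal sketch of that level-1 glue, assuming a Jenkins job with a shell build step and a Docker Engine on the same host (the registry and image names are made up):

# Jenkins shell build step: package the build output as an image,
# publish it, then point Docker Engine at the newly created image.
$ docker build -t registry.example.com/myapp:$BUILD_NUMBER .
$ docker push registry.example.com/myapp:$BUILD_NUMBER
$ docker rm -f myapp || true
$ docker run -d --name myapp -p 8080:8080 registry.example.com/myapp:$BUILD_NUMBER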

Glue level 2: Basic scripting to bridge components

In this case, a component does not have the capability to interface with the next logical component in the application deployment workflow.

For example, in a Docker Swarm, if a deployed service publishes port 80, then every node in the cluster reserves port 80 for that service via the routing mesh, whether or not the particular node is running an instance of the container.

Let’s say you have another application that needs to listen on port 80. Because the whole Docker Swarm has already locked down port 80, you’ll have to use an external load balancer that’s tied in with DNS to listen to, for example, appA.mycluster.com and appB.mycluster.com (both listening at port 80 at the ingress side of the load balancer).


After the containers have been deployed by an external script, you’ll have to interface with the load balancer to configure it to listen to the app and forward to the appropriate node.
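A sketch of that workaround: publish each service on its own ingress port and let the external load balancer map the friendly host names to those ports (the service names, ports and images are illustrative):

# Each service publishes a distinct port, which the routing mesh
# reserves on every node in the swarm.
$ docker service create --name appA --publish 8080:80 myorg/app-a
$ docker service create --name appB --publish 8081:80 myorg/app-b

# An external script then configures the load balancer so that
# appA.mycluster.com:80 forwards to any node on 8080 and
# appB.mycluster.com:80 forwards to any node on 8081.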

Glue level 3: Scripting to manage components

In this case, your workflow moves from one component into multiple separate components. At this point, you’re creating a middle-tier component that needs to maintain state and possibly coordinate workflows. You may have access to component automation (such as HashiCorp’s Terraform or Red Hat CloudForms), but you still need a controlling entity that understands the application workflow and state.

Let’s say you have multiple Cloud Foundry instances with an application consisting of a web front-end container, a logic processing container, and an email generation container. You happen to want those containers on the separate Cloud Foundry instances. Even if you don’t need to create a cloud-spanning application, you may want to run applications in different clouds or move applications between clouds. This will require coordination outside of those platforms’ instances.

Assuming you’ve already laid the networking groundwork to connect those Cloud Foundry instances, your custom platform will have to interface with each instance, ship and run the containers, and network those containers appropriately.
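The coordinating glue ends up looking something like this sketch with the cf CLI, where the API endpoints, credentials and app names are all hypothetical:

# Ship the web front end to the first Cloud Foundry instance...
$ cf api https://api.cf-east.example.com
$ cf auth deployer "$CF_EAST_PASSWORD"
$ cf target -o myorg -s production
$ cf push web-frontend -p ./web

# ...and the logic and email components to a second instance.
$ cf api https://api.cf-west.example.com
$ cf auth deployer "$CF_WEST_PASSWORD"
$ cf target -o myorg -s production
$ cf push logic-processor -p ./logic
$ cf push email-generator -p ./email

# Tracking state across instances, retrying failures and wiring the
# cross-instance networking is the part your custom platform owns.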

Glue level 4: Your own enterprise automation at a level above the deployment workflow

In this case, you have enough glue for a basic start-to-finish workflow from source to deployment, but now you are considering enterprise-level features, including:

  • Automated provisioning and updating of the cluster control software, across multiple or hybrid clouds
  • Advanced container scheduling (for example, affinity between application containers)
  • Establishing role or attribute-based access controls that map organizational structures to rights on the platform
  • Resource quotas
  • Automatic firewall/IPtables management
  • Governance via a policy framework

Here is a simple example of one of the possibilities from a non-DIY alternative, the Apcera Platform. Let’s say your company has these business rules:

  1. Applications from development must run in AWS
  2. Applications in production must run in OpenStack (on-premises)

In the Apcera Platform, these business rules are translated by the admin as:

on job::/dev {
  schedulingTag.hard aws
}

on job::/production {
  schedulingTag.hard openstack
}

When a user (or automation) is identified as part of the /dev or /production namespace in the Apcera Platform, any applications deployed by that user (or automation) will be automatically deployed on the runtime components labeled with aws or openstack, appropriately. Users can either specify a tag when deploying applications (which will be checked by the policy system against a list of allowable tags) or not specify a tag and let the platform choose a runtime component automatically. Because Apcera labels can be arbitrarily defined, admins can create deployment policy for things like requiring “ssd” performance or “special-CPUs.”

Once you have built a platform that spans both AWS and OpenStack (as a single “cluster” or multiple clusters glued together), allowing operators to choose a location is straightforward. With Docker Swarm, for example, service constraints do the job:

$ docker service create \
  --name redis_2 \
  --constraint 'node.id == 2ivku8v2gvtg4' \
  redis:3.0.6

In this example, an operator chooses to deploy Redis via Docker Swarm to the specified Docker Engine node. While this is great for operator choice, this choice is not enforced. How do you enforce the company policy of deploying only to the on-premises OpenStack instance if this is a production job (per the company policy, above)?

How long are you willing to wait for the community (in this specific case, brokered through Docker Inc.) to implement this type of enforcement?

Let’s assume you’re left with coding this simple placement policy enforcement yourself. Let’s consider the planning for this effort:

  • You’d have to lock out all general access to Docker except for your enforcement automation.
  • Your enforcement automation has to be some kind of server that can proxy requests from clients.
  • You’d need to identify clients as individual users or members of some group. Do you want to run your own user management or create an interface to your enterprise LDAP server?
  • You’d need to associate the user/group with “production.”
  • You’d need to create a rule framework that permits an entry that translates to “jobs from production can only deploy to OpenStack Docker nodes.”
  • You’d need to create a way to track the node.ids of the Docker Swarm nodes that run on OpenStack.
  • You’d need to keep track of the resources available on each node to see if they can handle the resource requirements of your Docker image.
  • You’d need to understand the resource requirements of your Docker image.

What if, instead of a hard requirement that applications run on specific nodes, you’re sometimes OK with a soft requirement? That is, make a best effort to deploy on the specified nodes but, failing that, deploy on other nodes. Do you really want to write your own scheduler code to fill in the gaps between what Docker offers? Apcera does all of this container management scheduling (via hard and soft tags, and more) already.

All of this is glue code you’d have to write yourself, simply to solve the problem of enforcing where your applications can run. What about enforcing build dependencies as policy? Or a resource quota policy? Or a virtual networking policy? Do you really want to write your own policy engine? Apcera was created not only to automate these tasks, but to provide a unified policy framework to govern all of them.

 

 

[Source:- Javaworld]