Best value Mac: which is the best £1,249 Mac to buy?

Apple sells a number of Macs at the magic price point of £1,249 ($1,299 in the US). We're not sure why Apple settled on £1,249 as a price that so many Macs would have in common, but we do know which Mac offers the best value for money compared with the other Macs sold at £1,249. Read on to find out.

While you can buy cheaper Macs, those models are often so cut down that the saving isn't enough to justify the compromise on features. For example, the entry-level iMac costs £1,049, but it lacks so many of the features of the next model up that we would always recommend ignoring it.

Similarly, there are older Macs that Apple still sells at a lower price, such as the Mac mini and the MacBook Air, but we would warn against buying these as we think they are priced higher than they should be, considering their age.

So, given that we would rule out any Mac that costs less than £1,249, here’s how the line up of £1,249 Macs shapes up.

We also have the following Mac Buying Advice that might be useful to you:

  • Best cheap Mac
  • Best Pro Mac
  • Best Mac buying guide
  • Best Mac desktop
  • Best Mac laptop

Apple’s £1,249 Macs

Apple sells a MacBook, MacBook Pro, and an iMac at £1,249. The company also offers build-to-order options on the MacBook Air and the Mac mini that allow you to upgrade the components in those models for the same price.

  • 12-inch MacBook with 1.2GHz Intel m3 Kaby Lake Processor (Turbo Boost up to 3.0GHz), 256GB Storage, 8GB RAM, and integrated Intel HD Graphics 615. It’s been on sale since June 2017.  Reviewed here.
  • 13-inch MacBook Pro with 2.3GHz Intel i5 Kaby Lake dual-core processor (Turbo Boost up to 3.6GHz), 128GB Storage, 8GB RAM, and integrated Intel Iris Plus Graphics 640. On sale since June 2017. Reviewed here.
  • MacBook Air with 1.8GHz Intel Core i5 Broadwell dual-core processor (Turbo Boost up to 2.9GHz), 512GB Storage, 8GB RAM, and Intel HD Graphics 6000. This model has been tweaked but it’s essentially the same computer as the one Apple started selling in 2015. Here we have added more storage as a build-to-order option to bring it up to the magic £1,249 price point. Reviewed here.
  • 21.5-inch iMac with Retina 4K Display, 3.0GHz i5 Kaby Lake quad-core processor (Turbo Boost up to 3.5GHz), 1TB Storage, and a discrete Radeon Pro 555 graphics card with 2GB of video memory. On sale since June 2017. Reviewed here.
  • The most expensive Mac mini costs £949, but with build-to-order options you can spend close to £1,249 and get a 3.0GHz i7 dual-core processor (Turbo Boost up to 3.5GHz), 8GB RAM, a 2TB Fusion Drive, and Intel Iris Graphics; adding an Apple HDMI to DVI Adapter brings the price up to £1,248. The Mac mini hasn’t changed since this generation was introduced in 2014. Reviewed here.

Which of these options will offer you the best value for money? Primarily, it depends on your needs and what you intend to do with the Mac. You might benefit more from particular ports than from a faster processor, for example, or a discrete graphics card might be the deal-breaker for you rather than an integrated option. We evaluate the various specifications below.

Best processor

If you are after the most powerful Mac you can get for £1,249, then the first place to look is the processor.

It should be noted, though, that other factors can affect the speed of your Mac, or its ability to process data. For example, flash storage can speed up operations, making a laptop feel faster than a more powerful Mac desktop that has a standard hard drive.

You’ll find the best processor in the 21.5-inch iMac – a 3.0GHz quad-core i5 Kaby Lake processor.

Not only is its clock speed higher, it has four cores, where all the other options have only two.

However, if you note the Turbo Boost rating, the 2.3GHz MacBook Pro is actually higher: 3.6GHz rather than 3.5GHz. So if you were after a laptop rather than a desktop this would certainly be a good option.

There’s another option you might be considering: the Mac mini configuration we priced up above, with its 3.0GHz i7 dual-core processor (Turbo Boost up to 3.5GHz). Would this be a better deal thanks to its i7 processor? An i7 is generally a better option than an i5, and worth it if you have the choice, but in this case we wouldn’t recommend it: this Mac mini hasn’t been updated since 2014, so it has an older processor that won’t perform as well as the Kaby Lake generation.

Best graphics

The best graphics option for your money is the iMac. It is the only Mac in this list to offer a discrete graphics card – in other words, a graphics card that isn’t built on to the processor and has its own RAM that it can access independently.

If you will be using your Mac for gaming, graphics or video then we’d recommend the iMac as the best option from this line up.

Not only does the iMac offer a discrete graphics card (in the form of the Radeon Pro 555), you’ll also get the gorgeous 4K Retina display on which to admire the graphics – and we’ll discuss that next.

Best display

Hands down, the best display on offer here is the one on the 21.5-inch iMac. It isn’t just better than any other display available for £1,249; it’s better than the previous year’s iMac too (just in case you were thinking of saving a few pennies by buying last year’s model from the Apple Refurbished Store).

Apple has made improvements to the 2017 4K display on the iMac compared to the previous generation. The display, which still gives a resolution of 4,096×2,304 pixels (that’s 219 pixels per inch), now offers 500 nits of brightness and 10-bit dithering, which means it’s 43 per cent brighter and capable of reproducing even more colours – one billion, according to Apple. It’s a P3 display, which means it is capable of producing colours outside the sRGB colour gamut.

Apple says that these new iMac displays will offer an “even more vivid and true-to-life viewing experience”.

The iMac isn’t the only Mac here to offer a Retina display, though. If you are looking for a laptop with a great display, you have two choices from this line-up: the MacBook Pro and the MacBook. (The MacBook Air display is just 1,440×900 pixels, and the Mac mini doesn’t ship with a display.)

The difference between the 12-inch MacBook and the 13-inch MacBook Pro screen doesn’t just boil down to an inch of extra space diagonally.

The 13-inch MacBook Pro screen offers a 2560×1600 native resolution at 227 pixels per inch, while the 12-inch MacBook offers 2304×1440 at 226 pixels per inch. The number of pixels per inch is very similar and, you’ll note, slightly higher than the iMac’s (incidentally, the 5K display on the 27-inch iMac offers 5120×2880 at 218 pixels per inch).

We don’t think a few extra pixels per inch will make a massive difference to the way you see the image, though – Apple’s term “Retina display” is supposed to signify that it’s impossible for your eye to detect the pixels from a typical viewing distance.
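Those pixels-per-inch figures follow directly from the panel geometry: ppi is the diagonal resolution in pixels divided by the diagonal size in inches. Here’s a quick sketch of the arithmetic (the 13.3in and 12in diagonals are the nominal panel sizes; Apple rounds the first to “13-inch”):

```python
import math

# Pixels per inch = diagonal resolution in pixels / diagonal size in inches.
def ppi(width_px, height_px, diagonal_in):
    return math.hypot(width_px, height_px) / diagonal_in

print(round(ppi(2560, 1600, 13.3)))  # 227 - 13in MacBook Pro
print(round(ppi(2304, 1440, 12.0)))  # 226 - 12in MacBook
print(round(ppi(4096, 2304, 21.5)))  # 219 - 21.5in iMac 4K
```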

Your choice, then, should be based on the size of the screen you need. If the 12-inch display on the MacBook feels too cramped, then perhaps the 13-inch MacBook Pro would be the better choice – but remember that you can always plug in a separate display and use that when you’re at your desk.

And if your plan is to plug in a separate display, then perhaps the MacBook Pro would be the better option, because you have the luxury of more than one port to plug into – which takes us on to our next point.

Best storage

The Mac mini’s Fusion Drive storage option combines flash and a hard drive to give you the best of both worlds. If you really need the extra storage then, perhaps, just this once the Mac mini is a good option. However, we’d still advise avoiding it due to the age of this model.

Buying advice

It’s pretty clear, once you look at the specs of the different machines, that the best Mac you can get for £1,249 is the iMac, with its beautiful screen, fast quad-core processor, discrete graphics card, new and old generations of USB and Thunderbolt, and more.

However, if you don’t want a desktop computer, the choice is between the MacBook Pro and the MacBook. In this case we’d opt for the 13-inch MacBook Pro, because it’s more powerful than its smaller sibling and you won’t be so limited in terms of ports.


Researchers from the UGR develop a new software which adapts medical technology to see the interior of a sculpture


A student at the University of Granada (UGR) has designed software that adapts current medical technology to analyze the interior of sculptures – a tool for seeing inside wood carvings without damaging them, designed for the restoration and conservation of sculptural heritage.

Francisco Javier Melero, professor of Languages and Computer Systems at the University of Granada and director of the project, says that the new software simplifies medical technology and adapts it to the needs of restorers working with wood carvings.

The software, called 3DCurator, is a specialized viewer that applies computed tomography to the restoration and conservation of sculptural heritage. It adapts medical CT scanning to restoration work and displays a 3-D image of the carving to be examined.

Replacing traditional X-rays with this system allows restorers to examine the interior of a statue without the overlapping information presented by older techniques, and reveals its internal structure, the age of the wood from which it was made, and possible later additions.

“The software that carries out this task has been simplified in order to allow any restorer to easily use it. You can even customize some functions, and it allows the restorers to use the latest medical technology used to study pathologies and apply it to constructive techniques of wood sculptures,” says professor Melero.


This system, which can be downloaded for free, visualizes the hidden information of a carving: it verifies whether the piece contains metallic elements, identifies damage from xylophages such as termites and the tunnels they make, and detects plasters or polychrome paintings added later over the original finishes.

The main developer of 3DCurator was Francisco Javier Bolívar, who stressed that the tool will mean a notable breakthrough in the field of conservation and restoration of cultural assets and the analysis of works of art by experts in Art History.

Professor Melero explains that this new tool has already been used to examine two sculptures owned by the University of Granada: a statue of San Juan Evangelista from the 16th century and an Immaculate Conception from the 17th century, which can be virtually examined at the Virtual Heritage Site of the Andalusian Universities.





Upcoming Windows 10 update reduces spying, but Microsoft is still mum on which data it specifically collects


There’s some good news for privacy-minded individuals who haven’t been fond of Microsoft’s data collection policy with Windows 10. When the upcoming Creators Update drops this spring, it will overhaul Microsoft’s data collection policies. Terry Myerson, executive vice president of Microsoft’s Windows and Devices Group, has published a blog post with a list of the changes Microsoft will be making.

First, Microsoft has launched a new web-based privacy dashboard with the goal of giving people an easy, one-stop location for controlling how much data Microsoft collects. The privacy dashboard has sections for Browse, Search, Location, and Cortana’s Notebook, each covering a different category of data Microsoft might have received from your hardware. Personally, I keep the digital assistant side of Cortana permanently deactivated and have already set telemetry to minimal, but if you haven’t taken those steps, you can adjust how much data Microsoft keeps from this page.

Second, Microsoft is condensing its telemetry options. Currently, there are four options — Security, Basic, Enhanced, and Full. Most consumers only have access to three of these settings — Basic, Enhanced, and Full. The fourth, Security, is reserved for Windows 10 Enterprise and Windows 10 Education. Here’s how Microsoft describes each category:

Security: Information that’s required to help keep Windows, Windows Server, and System Center secure, including data about the Connected User Experience and Telemetry component settings, the Malicious Software Removal Tool, and Windows Defender.

Basic: Basic device info, including: quality-related data, app compatibility, app usage data, and data from the Security level.

Enhanced: Additional insights, including: how Windows, Windows Server, System Center, and apps are used, how they perform, advanced reliability data, and data from both the Basic and the Security levels.

Full: All data necessary to identify and help to fix problems, plus data from the Security, Basic, and Enhanced levels.

That’s the old system. Going forward, Microsoft is collapsing the number of telemetry levels to two. Here’s how Myerson describes the new “Basic” level:

[We’ve] further reduced the data collected at the Basic level. This includes data that is vital to the operation of Windows. We use this data to help keep Windows and apps secure, up-to-date, and running properly when you let Microsoft know the capabilities of your device, what is installed, and whether Windows is operating correctly. This option also includes basic error reporting back to Microsoft.
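For readers who manage their own machines, the telemetry level also maps to a documented Group Policy registry value, `AllowTelemetry` (0 = Security, 1 = Basic, 2 = Enhanced, 3 = Full; 0 is only honored on Enterprise and Education editions). A minimal .reg fragment pinning the level to Basic looks like this:

```reg
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Windows\DataCollection]
"AllowTelemetry"=dword:00000001
```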

Windows 10 will also include an enhanced privacy section that will show during start-up and offer much better granularity over privacy settings. Currently, many of these controls are buried in various menus that you have to manually configure after installing the operating system.

It’s nice that Microsoft is cutting back on telemetry collection at the Basic level. The problem is, as Steven J. Vaughan-Nichols writes, Microsoft is still collecting a creepy amount of information on “Full,” and it still defaults to sharing all this information with Cortana — which means Microsoft has data files on people that it can be compelled to turn over by a warrant from an organization like the NSA or FBI. Given the recent expansion of the NSA’s powers, this information can now be shared with a variety of other agencies without filtering it first. And while Microsoft’s business model doesn’t directly depend on scraping and selling customer data the way Google’s does, the company is still gathering an unspecified amount of information. Full telemetry, for example, may “unintentionally include parts of a document you were using when a problem occurred.” Vaughan-Nichols isn’t thrilled about that idea, and neither am I.

The problem with Microsoft’s disclosure is that it mostly doesn’t disclose. Even Basic telemetry is described only as including “data that is vital to the operation of Windows.” Okay. But what does that mean?

I’m glad to see Microsoft taking steps towards restoring user privacy, but these are small steps that only modify policies around the edges. Until the company actually and meaningfully discloses what telemetry is collected under Basic settings and precisely what Full settings do and don’t send in the way of personally identifying information, the company isn’t explaining anything so much as it’s using vague terms and PR in place of a disclosure policy.

As I noted above, I’d recommend turning Cortana (the assistant) off. If you don’t want to do that, you should regularly review the information Microsoft has collected about you and delete any items you don’t want to be part of the company’s permanent record.



[Source: ExtremeTech]

Which freaking big data programming language should I use?


You have a big data project. You understand the problem domain, you know what infrastructure to use, and maybe you’ve even decided on the framework you will use to process all that data, but one decision looms large: What language should I choose? (Or perhaps more pointed: What language should I force all my developers and data scientists to suffer?) It’s a question that can be put off for only so long.

Sure, there’s nothing stopping you from doing big data work with, say, XSLT transformations (a good April Fools’ suggestion for tomorrow, simply to see the looks on everybody’s faces). But in general, there are three languages of choice for big data these days — R, Python, and Scala — plus the perennial stalwart enterprise tortoise of Java. What language should you choose and why … or when?

Here’s a rundown of each to help guide your decision.


R

R is often called “a language for statisticians built by statisticians.” If you need an esoteric statistical model for your calculations, you’ll likely find it on CRAN — it’s not called the Comprehensive R Archive Network for nothing, you know. For analysis and plotting, you can’t beat ggplot2. And if you need to harness more power than your machine can offer, you can use the SparkR bindings to run Spark on R.

However, if you are not a data scientist and haven’t used Matlab, SAS, or Octave before, it can take a bit of adjustment to be productive in R. While it’s great for data analysis, it’s less good as a general-purpose language. You’d construct a model in R, but you’d consider translating it into Scala or Python for production, and you’d be unlikely to write a clustering control system in the language (good luck debugging it if you do).


Python

If your data scientists don’t do R, they’ll likely know Python inside and out. Python has been very popular in academia for more than a decade, especially in areas like natural language processing (NLP). As a result, if you have a project that requires NLP work, you’ll face an embarrassing number of choices, including the classic NLTK, topic modeling with Gensim, or the blazing-fast and accurate spaCy. Similarly, Python punches well above its weight when it comes to neural networks, with Theano and TensorFlow; then there’s scikit-learn for machine learning, as well as NumPy and Pandas for data analysis.
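To give a flavor of why Python is pleasant for quick text work, here’s a toy bag-of-words count using only the standard library (a real project would reach for NLTK, Gensim, or spaCy as mentioned above):

```python
import re
from collections import Counter

# Lowercase the text, pull out word-like tokens, and tally them.
def bag_of_words(text):
    tokens = re.findall(r"[a-z']+", text.lower())
    return Counter(tokens)

counts = bag_of_words("The cat sat on the mat. The mat was flat.")
print(counts.most_common(2))  # [('the', 3), ('mat', 2)]
```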

There’s Jupyter/IPython too — the web-based notebook server that allows you to mix code, plots, and, well, almost anything, in a shareable logbook format. This had been one of Python’s killer features, although these days the concept has proved so useful that it has spread to almost all languages that have a read-evaluate-print loop (REPL), including both Scala and R.

Python tends to be supported in big data processing frameworks, but at the same time, it tends not to be a first-class citizen. For example, new features in Spark almost always appear first in the Scala/Java bindings, and it may take a few minor versions for those updates to be made available in PySpark (especially true of the Spark Streaming/MLlib side of development).

As opposed to R, Python is a traditional object-oriented language, so most developers will be fairly comfortable working with it, whereas first exposure to R or Scala can be quite intimidating. A slight issue is the requirement for correct whitespace in your code. This splits people between those who say “this is great for enforcing readability” and those of us who believe that in 2016 we shouldn’t need to fight an interpreter to get a program running because a line has one character out of place (you might guess where I fall on this issue).
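The whitespace point is easy to demonstrate: in Python, indentation is the block structure, so moving a statement one level changes what the program computes, and a stray misalignment is a syntax error rather than a style nit. A minimal illustration:

```python
# Indentation is syntax in Python: these two functions differ only in
# where the accumulation line is indented, and they compute different sums.

def total_of_evens(numbers):
    total = 0
    for n in numbers:
        if n % 2 == 0:
            total += n   # inside the if: only even numbers are added
    return total

def total_of_all(numbers):
    total = 0
    for n in numbers:
        if n % 2 == 0:
            pass
        total += n       # dedented one level: every number is added
    return total

print(total_of_evens([1, 2, 3, 4]))  # 6
print(total_of_all([1, 2, 3, 4]))    # 10
```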


Scala

Ah, Scala — of the four languages in this article, Scala is the one that leans back effortlessly against the wall with everybody admiring its type system. Running on the JVM, Scala is a mostly successful marriage of the functional and object-oriented paradigms, and it’s currently making huge strides in the financial world and companies that need to operate on very large amounts of data, often in a massively distributed fashion (such as Twitter and LinkedIn). It’s also the language that drives both Spark and Kafka.

As it runs on the JVM, it immediately gets access to the Java ecosystem for free, but it also has a wide variety of “native” libraries for handling data at scale (in particular Twitter’s Algebird and Summingbird). It also includes a very handy REPL for interactive development and analysis, as in Python and R.

I’m very fond of Scala, if you can’t tell, as it includes lots of useful programming features like pattern matching and is considerably less verbose than standard Java. However, there’s often more than one way to do something in Scala, and the language advertises this as a feature. And that’s good! But given that it has a Turing-complete type system and all sorts of squiggly operators (‘/:’ for foldLeft and ‘:\’ for foldRight), it is quite easy to open a Scala file and think you’re looking at a particularly nasty bit of Perl. A set of good practices and guidelines to follow when writing Scala is needed (Databricks’ are reasonable).
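For readers who haven’t met those fold operators, here’s what they mean, sketched in Python rather than Scala: a left fold walks the sequence front to back, a right fold back to front. Python spells the left fold `functools.reduce`, and a right fold can be built by folding the reversed sequence with flipped arguments:

```python
from functools import reduce

# Scala's (acc /: xs)(f) is a left fold; (xs :\ acc)(g) is a right fold.
def fold_left(f, acc, xs):
    return reduce(f, xs, acc)

def fold_right(f, xs, acc):
    # Fold the reversed sequence, flipping argument order to match f(x, acc).
    return reduce(lambda a, x: f(x, a), reversed(xs), acc)

# Subtraction makes the direction visible:
print(fold_left(lambda a, x: a - x, 0, [1, 2, 3]))   # ((0-1)-2)-3 = -6
print(fold_right(lambda x, a: x - a, [1, 2, 3], 0))  # 1-(2-(3-0)) = 2
```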

The other downside: the Scala compiler is a touch slow, to the extent that it brings back the days of the classic “compiling!” XKCD strip. Still, it has the REPL, big data support, and web-based notebooks in the form of Jupyter and Zeppelin, so I forgive a lot of its quirks.


Java

Finally, there’s always Java — unloved, forlorn, owned by a company that only seems to care about it when there’s money to be made by suing Google, and completely unfashionable. Only drones in the enterprise use Java! Yet Java could be a great fit for your big data project. Consider Hadoop MapReduce — Java. HDFS? Written in Java. Even Storm, Kafka, and Spark run on the JVM (in Clojure and Scala), meaning that Java is a first-class citizen of these projects. Then there are new technologies like Google Cloud Dataflow (now Apache Beam), which until very recently supported Java only.

Java may not be the ninja rock star language of choice. But while other developers are straining to sort out the nest of callbacks in their Node.js applications, using Java gives you access to a large ecosystem of profilers, debuggers, monitoring tools, libraries for enterprise security and interoperability, and much more besides, most of which have been battle-tested over the past two decades. (I’m sorry, everybody; Java turns 21 this year and we are all old.)

The main complaints against Java are its heavy verbosity and the lack of a REPL (present in R, Python, and Scala) for iterative development. I’ve seen 10 lines of Scala-based Spark code balloon into a 200-line monstrosity in Java, complete with huge type statements that take up most of the screen. However, the new lambda support in Java 8 does a lot to rectify this situation. Java is never going to be as compact as Scala, but Java 8 really does make developing in Java less painful.

As for the REPL? OK, you got me there — currently, anyhow. Java 9 (out next year) will include JShell for all your REPL needs.

Drumroll, please

Which language should you use for your big data project? I’m afraid I’m going to take the coward’s way out and come down firmly on the side of “it depends.” If you’re doing heavy data analysis with obscure statistical calculations, then you’d be crazy not to favor R. If you’re doing NLP or intensive neural network processing across GPUs, then Python is a good bet. And for a hardened, production streaming solution with all the important operational tooling, Java or Scala are definitely great choices.

Of course, it doesn’t have to be either/or. For example, with Spark, you can train your model and machine learning pipeline with R or Python with data at rest, then serialize that pipeline out to storage, where it can be used by your production Scala Spark Streaming application. While you shouldn’t go overboard (your team will quickly suffer language fatigue otherwise), using a heterogeneous set of languages that play to particular strengths can bring dividends to a big data project.


[Source: JW]