Tim Sweeney is positively steam-ed about Microsoft’s Windows Cloud operating system


Yesterday, we reported on Windows Cloud — a new version of Microsoft’s Windows 10 that’s supposedly in the works. Windows Cloud would be limited to applications that are available through the Windows Store and is widely believed to be a play for the education market, where Chromebooks are currently popular.

Tim Sweeney, the founder of Epic and lead developer on the Unreal Engine, has been a harsh critic of Microsoft and its Windows Store before. He wasted no time firing off a blistering tirade against this new variant of the operating system, before Microsoft had even had a chance to launch it.

With all respect to Tim, I think he’s wrong on this for several reasons. First, the idea that the Windows Store is going to crush Steam is simply farcical. There is no way for Microsoft to disallow Steam or other applications from running in mainstream Windows without completely breaking Win32 compatibility in its own operating system. Smartphone manufacturers were able to introduce app stores and walled gardens early on, before users had accumulated libraries of existing software; Windows has no such luxury. Fortune 500 companies, gamers, enthusiasts, and computer users in general would never accept an OS that refused to run Win32 applications.

The second reason the Windows Store is never going to crush Steam is that the Windows Store is, generally speaking, a wasteland where software goes to die. The mainstream games that have debuted on that platform have generally been poor deals compared with what’s available on other platforms (like Steam). There’s little sign Microsoft is going to change this anytime soon, and until it does, Steam’s near-monopoly on PC game distribution is safe.

Third, if Microsoft is positioning this as a play against Chrome OS, Windows Cloud isn’t going to debut on high-end systems that are gaming-capable in the first place. This is a play aimed at low-end ARM or x86 machines with minimal graphics and CPU performance. In that space, a locked-down system is a more secure system. That’s a feature, not a bug, if your goal is to build systems that won’t need constant IT attention to deal with trojans, malware, and bugs.

Like Sweeney, I value the openness and capability of the PC ecosystem — but I also recognize that there are environments and situations where that openness is a risk with substantial downside and little benefit. Specialized educational systems for low-end markets are not a beachhead aimed at destroying Steam. They’re a rear-guard action aimed at protecting Microsoft’s educational market share from an encroaching Google.


[Source:- Extremetech]

‘Trump & Dump’ program aims to profit off Trump tweets


Techies have devised a program to execute quickfire stock trades to take advantage of President Donald Trump’s Twitter habit of attacking individual companies.

And the president’s tweets are saving puppies: when the program earns money, the funds are donated to an animal welfare group.

The “Trump & Dump” artificial intelligence program identifies Trump’s market-moving tweets, assesses instantaneously whether the sentiment is positive or negative and then executes a speedy trade.

Ben Gaddis, president of T3, an Austin, Texas-based marketing and technology company, said the idea was sparked by watching Trump’s actions during his transition, when Twitter attacks on companies such as Boeing and Lockheed Martin sent their share prices tumbling.

“Everyone is asking themselves how to deal with the unpredictability of Trump’s tweets,” Gaddis told AFP. T3’s response was to develop a “bot,” a piece of software that does automated tasks, to trade on the information.
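
T3 hasn’t published the bot’s code, so the following Python sketch is purely illustrative of the pipeline described above: watch for a tweet naming a listed company, score its sentiment, and trade accordingly. The watchlist, keyword scorer, and broker object are all hypothetical stand-ins.

    # Illustrative sketch only; T3 has not published its code.
    # The watchlist, keyword scorer, and broker client are hypothetical stand-ins.

    WATCHLIST = {"boeing": "BA", "lockheed martin": "LMT", "toyota": "TM"}
    NEGATIVE_CUES = {"out of control", "cancel", "ridiculous", "way too expensive"}

    def find_ticker(tweet: str):
        """Return the ticker of the first watchlist company named in the tweet."""
        text = tweet.lower()
        for company, ticker in WATCHLIST.items():
            if company in text:
                return ticker
        return None

    def is_negative(tweet: str) -> bool:
        """Crude keyword-based sentiment check."""
        text = tweet.lower()
        return any(cue in text for cue in NEGATIVE_CUES)

    def on_tweet(tweet: str, broker) -> None:
        """Short the stock on a negative tweet, buy on a positive one."""
        ticker = find_ticker(tweet)
        if ticker is None:
            return
        if is_negative(tweet):
            broker.sell_short(ticker)  # bet the share price falls
        else:
            broker.buy(ticker)

In a real deployment the hard part is latency: the scoring and order placement have to finish within the roughly one-second window the article describes.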

The company has so far been pleased with the results, which yielded “significant winnings” on two occasions and a “slight” loss on a third trade, Gaddis said.

In early January, T3 scored a “huge” profit by betting Toyota’s share price would fall after Trump lambasted the automaker for building cars in Mexico, it said in a short video on the T3 website.

The time lag between the Trump tweet and the T3 trade was only a second, according to the same video.

T3, which has pictures of numerous dogs on its website and describes itself as having “dog friendly offices,” is donating the earnings from the bot-directed trades to the American Society for the Prevention of Cruelty to Animals (ASPCA).

“So now, when President Trump tweets, we save a puppy,” the video says.


[Source:- Phys.org]

Take a closer look at your Spark implementation


Apache Spark, the extremely popular data analytics execution engine, was initially released in 2012. It wasn’t until 2015 that support for Spark really took off, but by November 2015, Spark saw 50 percent more activity than the core Apache Hadoop project itself, with more than 750 contributors from hundreds of companies participating in its development in one form or another.

Spark is a hot new commodity for a reason. Its performance, general-purpose applicability, and programming flexibility combine to make it a versatile execution engine. Yet that variety also leads to varying levels of support for the product and different ways solutions are delivered.

While evaluating analytic software products that support Spark, customers should look closely under the hood and examine four key facets of how the support for Spark is implemented:

  • How Spark is utilized inside the platform
  • What you get in a packaged product that includes Spark
  • How Spark is exposed to you and your team
  • How you perform analytics with the different Spark libraries

Spark can be used as a developer tool via its APIs, or it can be used by BI tools via its SQL interface. Or Spark can be embedded in an application, providing access to business users without requiring programming skills and without limiting Spark’s utility through a SQL interface. I examine each of these options below and explain why all Spark support is not the same.

Programming on Spark

If you want the full power of Spark, you can program directly to its processing engine. There are APIs that are exposed through Java, Python, Scala, and R. In addition to stream and graph processing components, Spark offers a machine-learning library (MLlib) as well as Spark SQL, which allows data tools to connect to a Spark engine and query structured data, or programmers to access data via SQL queries they write themselves.
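
As a concrete illustration of the API route, here is a minimal PySpark sketch; the input file and column name are invented for the example.

    # Minimal PySpark sketch; the file path and column name are invented.
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("example").getOrCreate()

    # Read structured data into a DataFrame.
    df = spark.read.json("events.json")

    # Transformations are lazy; an action such as count() triggers
    # execution across the cluster.
    errors = df.filter(df.level == "ERROR")
    print(errors.count())

    spark.stop()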

A number of vendors offer standalone Spark implementations; the major Hadoop distribution suppliers also offer Spark within their platforms. Access is exposed either through a command line or Notebook interface.

But performing analytics on core Spark with its APIs is a time-consuming, programming-intensive process. While Spark offers an easier programming model than, say, native Hadoop, it still requires developers. Even for organizations with developer resources, deploying them to work on lengthy data analytics projects may amount to an intolerable hidden cost. For many organizations, programming on Spark is not an actionable course for this reason.

BI on Spark

Spark SQL is a standards-based way to access data in Spark. It has been relatively easy for BI products to add support for Spark SQL to query tabular data in Spark. The dialect of SQL used by Spark is similar to that of Apache Hive, making Spark SQL akin to earlier SQL-on-Hadoop technologies.

Although Spark SQL uses the Spark engine behind the scenes, it suffers from the same disadvantages as Hive and Impala: Data must be in a structured, tabular format to be queried. This forces Spark to be treated as if it were a relational database, which cripples many of the advantages of a big data engine. Simply put, running BI on top of Spark requires transforming the data into a reasonable tabular format that can be consumed by the BI tools.
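
The tabular requirement is easy to see in code: before BI-style SQL can run, the data must first be registered in tabular form. A short sketch, with invented file and column names:

    # Sketch: Spark SQL only sees data once it is exposed as a table or view.
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("bi-on-spark").getOrCreate()

    df = spark.read.parquet("sales.parquet")  # already-structured data
    df.createOrReplaceTempView("sales")       # register as a SQL-queryable view

    # From this point a BI tool (or a hand-written query) can use plain SQL.
    spark.sql("""
        SELECT region, SUM(revenue) AS total
        FROM sales
        GROUP BY region
        ORDER BY total DESC
    """).show()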

Embedding Spark

Another way to leverage Spark is to abstract away its complexity by embedding it deep into a product and taking full advantage of its power behind the scenes. This allows users to leverage the speed and power of Spark without needing developers.

This architecture brings up three key questions. First, does the platform truly hide all of the technical complexities of Spark? As a customer, you need to examine all aspects of how you would create each step of the analytic cycle — integration, preparation, analysis, visualization, and operationalization. A number of products offer self-service capabilities that abstract away Spark’s complexities, but others force the analyst to dig down and code — for example, in performing integration and preparation. These products may also require you to first ingest all your data into the Hadoop file system for processing. This adds extra length to your analytic cycles, creates fragile and fragmented analytic processes, and requires specialized skills.

Second, how does the platform take advantage of Spark? It’s critical to understand how Spark is used in the execution framework. Spark is sometimes embedded in a fashion that does not have the full scalability of a true cluster. This can limit overall performance as the volume of analytic jobs increases.

Third, how are you protected for the future? The strength of being tightly coupled with the Spark engine is also a weakness. The big data industry moves quickly. MapReduce was the predominant engine in Hadoop for six years. Apache Tez became mainstream in 2013, and now Spark has become a major engine. Assuming the technology curve continues to produce new engines at the same rate, Spark will almost certainly be supplanted by a new engine within 18 months, forcing products tightly coupled to Spark to be reengineered — a far from trivial undertaking. Even setting that effort aside, you must consider whether the redesigned product will be fully compatible with what you’ve built in the older version.

The first step to uncovering the full power of Spark is to understand that not all Spark support is created equal. It’s crucial that organizations grasp the differences in Spark implementations and what each approach means for their overall analytic workflow. Only then can they make a strategic buying decision that will meet their needs over the long haul.

Andrew Brust is senior director of market strategy and intelligence at Datameer.


[Source:- IW]

US tech industry says immigration order affects its operations


The U.S. tech industry has warned that a temporary entry suspension on certain foreign nationals, introduced on Friday by the administration of President Donald Trump, will impact company operations that depend on foreign workers.

The Internet Association, which counts Google, Amazon, Facebook and Microsoft among its members, said that Trump’s executive order limiting immigration and movement into the U.S. has troubling implications: its member companies, and firms in many other industries, employ legal immigrants who are covered by the order and will not be able to return to their jobs and families in the U.S.

“Their work benefits our economy and creates jobs here in the United States,” said Internet Association President and CEO Michael Beckerman in a statement over the weekend.

Executives of a number of tech companies like Twitter, Microsoft and Netflix have expressed concern about the executive order signed by Trump, which suspended for 90 days entry into the U.S. of persons from seven Muslim-majority countries – Iran, Iraq, Libya, Somalia, Sudan, Syria and Yemen – as immigrants and non-immigrants. The Trump administration has described the order as a move to prevent foreign terrorist entry into the U.S.

Tech companies like Uber, Apple, Microsoft and Google are in touch with employees affected by the order, according to reports. Uber is working on a scheme to compensate some of its drivers who come from the listed countries and had taken long breaks to see their extended families and are now unable to come back to the U.S., wrote CEO Travis Kalanick, who is a member of Trump’s business advisory group.

“As an immigrant and as a CEO, I’ve both experienced and seen the positive impact that immigration has on our company, for the country, and for the world,” wrote Satya Nadella, Microsoft CEO, in an online post over the weekend. “We will continue to advocate on this important topic.” Netflix CEO Reed Hastings wrote in a Facebook post that “Trump’s actions are hurting Netflix employees around the world, and are so un-American it pains us all.”

The tech industry is also concerned about further moves by the government on immigration policy that could place restrictions on visas for the entry of people who help these companies run their operations and develop products and services. The H-1B visa program, for example, has been criticized for being used to replace U.S. workers.

Microsoft’s Chief Legal Officer Brad Smith said in a note to employees on Saturday that the company believes in “a strong and balanced high-skilled immigration system.”


[Source:- Javaworld]


Who makes the most reliable hard drives?


Backblaze is back again, this time with updated hard drive statistics and failure rates for all of 2016. Backblaze’s quarterly reports on HDD failure rates and statistics are the best data set we have for measuring drive reliability and performance, so let’s take a look at the full year and see who the winners and losers are.

Backblaze only includes hard drive models in its report if it has at least 45 drives of that type, and it currently has 72,100 hard drives in operation.
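
To help interpret the numbers, Backblaze computes its headline statistic, the annualized failure rate, from failures and accumulated drive-days. A small Python sketch of that published formula, with made-up example numbers:

    # Backblaze's published annualized failure rate (AFR) formula:
    # AFR = failures / (drive-days / 365), expressed as a percentage.

    def annualized_failure_rate(failures: int, drive_days: int) -> float:
        """Failures per drive-year of operation, as a percentage."""
        drive_years = drive_days / 365
        return 100 * failures / drive_years

    # Made-up example: 60 failures across 1,200,000 accumulated drive-days
    # works out to an AFR of roughly 1.8 percent.
    print(f"{annualized_failure_rate(60, 1_200_000):.2f}%")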

Backblaze has explained before that it can tolerate a relatively high failure rate before it starts avoiding drives altogether, but the company has been known to take that step (it stopped using a specific type of Seagate drive at one point due to unacceptably high failure rates). Current Seagate drives have been much better, and the company’s 8TB drives are showing an excellent annualized failure rate.

Next, we’ve got something interesting — drive failure rates plotted against drive capacity.

The “stars” mark the average annualized failure rate for all of the hard drives for each year.

The giant peak in 3TB drive failures was driven by the Seagate ST3000DM001, with its 26.72% failure rate. Backblaze actually took the unusual step of yanking the drives after they proved unreliable. With those drives retired, the 3TB failure rate falls back to normal.

One interesting bit of information in this graph is that drive failure rates don’t really shift much over time. The shifts we do see are as likely to be caused by Backblaze’s perpetual rotation between various manufacturers as old drives are retired and new models become available. Higher capacity drives aren’t failing at statistically different rates than older, smaller drives, implying that buyers don’t need to worry that bigger drives are more prone to failure.

The usual grain of salt

As always, Backblaze’s data sets should be taken as a representative sample of how drives perform in this specific workload. Backblaze’s buying practices prioritize low-cost drives over any other type, and it doesn’t buy the enterprise drives that WD, Seagate, and other manufacturers position specifically for these kinds of deployments. Whether or not this has any impact on consumer drive failure rates isn’t known — HDD manufacturers advertise their enterprise hardware as having gone through additional validation and being designed specifically for high-vibration environments, but there are few studies on whether these claims result in meaningfully better performance or reliability.


Backblaze’s operating environment has little in common with a consumer desktop or laptop, and may not cleanly match the failure rates we would see in these products. The company readily acknowledges these limitations, but continues to provide its data on the grounds that having some information about real-world failure rates and how long hard drives live for is better than having none at all. We agree. Readers often ask which hard drive brands are the most reliable, but this information is extremely difficult to come by. Most studies of real-world failure rates don’t name brands or manufacturers, which limits their real-world applicability.


[Source:- Extremetech]

Researchers from the UGR develop new software that adapts medical technology to see the interior of a sculpture


A student at the University of Granada (UGR) has designed software that adapts current medical technology to analyze the interior of sculptures. The tool makes it possible to see the interior of wood carvings without damaging them, and it has been designed for the restoration and conservation of sculptural heritage.

Francisco Javier Melero, professor of Languages and Computer Systems at the University of Granada and director of the project, says that the new software simplifies medical technology and adapts it to the needs of restorers working with wood carvings.

The software, called 3DCurator, is a specialized viewer that brings computed tomography to the field of restoration and conservation of sculptural heritage. It adapts medical CT for restoration work and displays a 3-D image of the carving being examined.

Replacing traditional X-rays with this system allows restorers to examine the interior of a statue without the overlapping information presented by older techniques, revealing its internal structure, the age of the wood from which it was made, and possible additions.

“The software that carries out this task has been simplified in order to allow any restorer to easily use it. You can even customize some functions, and it allows the restorers to use the latest medical technology used to study pathologies and apply it to constructive techniques of wood sculptures,” says professor Melero.


This system, which can be downloaded for free from www.3dcurator.es, visualizes the hidden information of a carving, verifies whether it contains metallic elements, identifies damage from xylophages such as termites and the tunnels they make, and detects plasters or polychrome paintings added later, especially over the original finishes.

The main developer of 3DCurator was Francisco Javier Bolívar, who stressed that the tool will mean a notable breakthrough in the field of conservation and restoration of cultural assets and the analysis of works of art by experts in Art History.

Professor Melero explains that this new tool has already been used to examine two sculptures owned by the University of Granada: a statue of San Juan Evangelista from the 16th century and an Immaculate Conception from the 17th century, which can be virtually examined at the Virtual Heritage Site of the Andalusian Universities (patrimonio3d.ugr.es/).


[Source:- Phys.org]


Google Cloud SQL provides easier MySQL for all


With the general availability of Google Cloud Platform’s latest database offerings — the second generation of Cloud SQL, Cloud Bigtable, and Cloud Datastore — Google is setting up a cloud database strategy founded on a basic truth of software: Don’t get in the customer’s way.

For an example, look no further than the new iteration of Cloud SQL, a hosted version of MySQL for Google Cloud Platform. MySQL is broadly used by cloud applications, and Google is trying to keep it fuss-free — no small feat for any piece of software, let alone a database notorious for needing tweaks to work well.

Most of the automation around MySQL in Cloud SQL involves items that should be automated anyway, such as updates, automatic scaling to meet demand, autofailover between zones, and backup/roll-back functionality. This automation all comes via a recent version of MySQL, 5.7, not via an earlier version that’s been heavily customized by Google to support these features.

The other new offerings, Cloud Datastore and Cloud Bigtable, are fully managed incarnations of NoSQL and HBase/Hadoop systems. These systems have fewer users than MySQL, but are likely used to store gobs more data than MySQL is. One of MySQL 5.7’s new features, support for JSON data, provides NoSQL-like functionality for existing MySQL users. But users who are truly serious about NoSQL are likely to do that work on a platform designed to support it from the ground up.
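
To make the JSON point concrete, here is a rough sketch using the mysql-connector-python driver against a MySQL 5.7 or later instance; the table, credentials, and document shape are invented for the example.

    # Sketch of MySQL 5.7's JSON support; table, credentials, and document
    # shape are invented. Requires mysql-connector-python and MySQL 5.7+.
    import json
    import mysql.connector

    conn = mysql.connector.connect(user="app", password="secret", database="demo")
    cur = conn.cursor()

    # A JSON column stores validated documents with no fixed schema required.
    cur.execute("CREATE TABLE IF NOT EXISTS events (doc JSON)")
    cur.execute("INSERT INTO events (doc) VALUES (%s)",
                (json.dumps({"type": "click", "x": 42}),))

    # The -> and ->> operators (JSON_EXTRACT shorthand) query inside the
    # documents, giving NoSQL-style access without leaving MySQL.
    cur.execute("SELECT doc->'$.x' FROM events WHERE doc->>'$.type' = %s",
                ("click",))
    print(cur.fetchall())

    conn.commit()
    conn.close()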

The most obvious competition for Cloud SQL is Amazon’s Aurora service. When reviewed by InfoWorld’s Martin Heller in October 2015, it supported a recent version of MySQL (5.6) and had many of the same self-healing and self-maintaining features as Cloud SQL. Where Google has a potential edge is in the overall simplicity of its platform — a source of pride in other areas, such as a far less sprawling and complex selection of virtual machine types.

Another competitor is Snowflake, the cloud data warehousing solution designed to require little user configuration or maintenance. Snowflake’s main drawback is that it’s a custom-built database, even if it is designed to be highly compatible with SQL conventions. Cloud SQL, by contrast, is simply MySQL, a familiar product with well-understood behaviors.


[Source:- IW]

Google creates ‘crisis fund’ following US immigration ban


Tech giant Google has created a US$2 million crisis fund in response to US president Donald Trump’s immigration ban.

Google staff are also being invited to top up the fund, with the money going towards the American Civil Liberties Union (ACLU), Immigrant Legal Resource Center (ILRC), International Rescue Committee (IRC), and the UN High Commissioner for Refugees (UNHCR).

“We chose these organisations for their incredible efforts in providing legal assistance and support services for immigrants, as well as their efforts on resettlement and general assistance for refugees globally,” a Google spokesperson said.

The announcement follows requests last week by Google CEO Sundar Pichai for staff travelling overseas to return to the US. More than 100 staff are affected by President Trump’s executive order on immigration.

Since 2015, Google has given more than US$16 million to organisations focused on humanitarian aid for refugees on the ground, WiFi in refugee camps, and education for out-of-school refugee children in Lebanon, the spokesperson said.

Microsoft CEO Satya Nadella has also responded to the crisis, saying that as an immigrant himself, he has experienced the positive impact that immigration has on the company, the country and the world.

Nadella said Microsoft was providing legal advice and assistance to 76 staff who have a US visa and are citizens of Syria, Iraq, Iran, Libya, Somalia, Yemen, and Sudan.

In an email sent to Microsoft staff, the company’s chief legal officer, Brad Smith, said that Microsoft believes in a strong and balanced skilled immigration system.

“We also believe in broader immigration opportunities, like the protections for talented and law-abiding young people under the Deferred Action for Childhood Arrivals (DACA) program. We believe that immigration laws can and should protect the public without sacrificing people’s freedom of expression or religion. And we believe in the importance of protecting legitimate and law-abiding refugees whose very lives may be at stake in immigration proceedings,” he said.


[Source:- Javaworld]

Nintendo stands by Switch’s sparse launch lineup


Nintendo released its annual financial report this week, and president Tatsumi Kimishima defended the Switch’s sparse launch lineup while giving additional details on Nintendo’s mobile and console business performance. The Switch’s software lineup has been widely criticized for its unusually small size. Kimishima attempted to push back against this argument, saying:

Our thinking in arranging the 2017 software lineup is that it is important to continue to provide new titles regularly without long gaps. This encourages consumers to continue actively playing the system, maintains buzz, and spurs continued sales momentum for Nintendo Switch. For that reason, we will be releasing Mario Kart 8 Deluxe on April 28, ARMS, which is making its debut on the Nintendo Switch, during the first half of 2017, and Splatoon 2, which attracted consumers’ attention most during the hands-on events in Japan, in summer 2017.

The problem with this argument is that the Switch’s lineup is painfully thin, no matter how Nintendo tries to paper over the issue. The North American Switch will launch with 10 titles:

  • 1-2 Switch
  • The Binding of Isaac: Afterbirth+
  • Human Resource Machine
  • Just Dance 2017
  • The Legend of Zelda: Breath of the Wild
  • Little Inferno
  • I am Setsuna
  • Skylanders: Imaginators
  • Super Bomberman R
  • World of Goo

The Wii U launched with 32 titles, while the PS4 had 25 and the Xbox One had 22. Clearly launch titles alone don’t make or break a console, or the Wii U would’ve beaten both its rivals. But consumers do tend to treat launch support as indicative of overall developer buy-in.

What’s perhaps more worrying is the way this problem doesn’t resolve through the end of 2017. There are more games coming through the rest of the year (17 in total), but comparatively few top-franchise games. Mario Kart 8 Deluxe is a warmed-over refresh of a two-year-old game, and Splatoon doesn’t have the mass market appeal of a Mario or Pokemon game. Super Mario Odyssey is the biggest post-launch game for Switch with a 2017 launch date, and it won’t drop until the holiday season. When you combine the weak game lineup with the high price ($300), accessory costs, and lack of a bundled game, it’s hard to make a strong argument for the handheld — especially since Nintendo remains resolute that the Switch isn’t a handheld at all.

Nintendo’s 3DS sales figures help explain why. Nintendo sold roughly 2.1 million 3DS devices in 2016 in the US alone (Wikipedia estimates CY 2016 sales at 7.36 million devices worldwide). That’s vastly better than the Wii U, which saw a complete sales collapse this year, even in comparison with its previous anemic performance. As we’ve previously speculated, Nintendo literally can’t afford to quit on the 3DS, particularly with the Switch’s long-term sales strength so uncertain. The company continues to insist that the Switch and 3DS will exist concurrently, with separate libraries of games and different price points.

We suspect that this is little more than convenient fiction. Nintendo has proven perfectly happy to mislead the public about its plans in the past, arguing that the Nintendo DS wasn’t a replacement for the original Game Boy line, and more recently claiming that the Wii U would remain in production for the rest of the year when it ended hardware manufacturing well before that point. In both cases, the company was hedging its bets, giving itself room to pivot if a product didn’t take off. The monstrous success of Pokemon Sun and Moon explains the difference between FY 2016 and FY 2017 software sales for the 3DS — and also why Nintendo won’t step away from its established handheld until it knows it has a suitable replacement available. This could prove to be a mistake; the Switch’s capabilities position it much more effectively as a high-end handheld than as a living room console.

If the Switch sells well, Nintendo can introduce a cost-reduced version that would compete more directly against the 3DS at a later point, if needed. Both platforms will remain in market through 2017, with more games arriving for 3DS throughout the year.

Nintendo also acknowledged it has had some trouble converting Super Mario Run’s success into sales. While 78 million people have downloaded the game, the conversion rate is reportedly ~5%. That’s still an entirely respectable four million paying customers, but Nintendo seems to have had higher hopes for its first mobile title. Given that Super Mario Run actually has an up-front price tag rather than a micropayment system, 5% conversion rates sound fairly solid to us.

Finally, Nintendo confirmed that it continues to have trouble stocking the NES Classic Edition, but still managed to sell 1.5 million of the consoles through the holiday season. Considering that store fronts still can’t keep the system in stock for more than a few minutes at a time, the company severely underestimated demand here.


[Source:- Extremetech]

Complex 3-D data on all devices


A new web-based software platform is swiftly bringing the visualization of 3-D data to every device, optimizing the use of, for example, virtual reality and augmented reality in industry. In this way, Fraunhofer researchers have brought the ideal of “any data on any device” a good deal closer.

If you want to be sure that the person you are sending documents and pictures to will be able to open them on their computer, then you send them in PDF and JPG format. But what do you do with 3-D content? “A standardized option hasn’t existed before now,” says Dr. Johannes Behr, head of the Visual Computing System Technologies department at the Fraunhofer Institute for Computer Graphics Research IGD. In particular, industry lacks a means of taking the very large, increasingly complex volumes of 3-D data that arise and rendering them useful – and of being able to use the data on every device, from smartphones to VR goggles. “The data volume is growing faster than the means of visualizing it,” reports Behr. Fraunhofer IGD is presenting a solution to this problem in the form of its “instant3DHub” software, which allows engineers, technicians and assemblers to use spatial design and assembly plans without any difficulty on their own devices. “This will enable them to inspect industrial plants or digital buildings, etc. in real time and find out what’s going on there,” explains Behr.

Software calculates only visible components

On account of the gigantic volumes of data that have to be processed, such an undertaking has thus far been either impossible or possible only with a tremendous amount of effort. After all, users had to manually choose in advance which data should be processed for the visualization, a task then executed by expensive special software. Not exactly a cost-effective method, and a time-consuming one as well. With the web-based Fraunhofer solution, every company can adapt the visualization tool to its requirements. The software autonomously selects the data to be prepared, by intelligently calculating, for example, that only views of visible parts are transmitted to the user’s device. Citing the example of a power plant, Behr explains: “Out of some 3.5 million components, only the approximately 3,000 visible parts are calculated on the server and transmitted to the device.”
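
instant3DHub’s internals aren’t public, so the toy Python sketch below illustrates only the general idea of server-side visibility culling: reduce each component to a cheap bound, test it against the camera, and stream only the survivors.

    # Toy illustration of server-side visibility culling; not instant3DHub code.
    # Each component is reduced to a bounding sphere, and only components that
    # pass a cheap visibility test are prepared and streamed to the client.
    from dataclasses import dataclass

    @dataclass
    class Component:
        name: str
        center: tuple   # (x, y, z) position
        radius: float   # bounding-sphere radius

    def visible(c: Component, cam_pos: tuple, max_dist: float) -> bool:
        """Crude distance test standing in for real frustum/occlusion culling."""
        dist = sum((a - b) ** 2 for a, b in zip(c.center, cam_pos)) ** 0.5
        return dist - c.radius <= max_dist

    plant = [
        Component("pipe-1", (0.0, 0.0, 5.0), 1.0),
        Component("turbine", (0.0, 0.0, 500.0), 3.0),
    ]

    # Of the whole model, only the potentially visible parts are streamed.
    to_stream = [c for c in plant if visible(c, (0.0, 0.0, 0.0), max_dist=100.0)]
    print([c.name for c in to_stream])  # ['pipe-1']

A production system would use view-frustum and occlusion tests on the server, which is how a 3.5-million-component plant can be reduced to a few thousand transmitted parts.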

Such visibility calculations are especially useful for VR and AR applications, as the objects being viewed at any given moment appear in the display in real time. At CeBIT, researchers will be showing how well this works, using the example of car maintenance. In a VR application, it is necessary to load up to 120 images per second onto data goggles. In this way, several thousand points of 3-D data can be transmitted from a central database for a vehicle model to a device in just one second. The process is so fast because the complete data does not have to be loaded to the device, as used to be the case, but is streamed over the web. A huge variety of 3-D web applications are delivered on the fly, without permanent storage, so that even mobile devices such as tablets and smartphones can make optimal use of them.

One key feature of this process is that for every access to instant3DHub, the data is assigned to, prepared for, and visualized for the specific application. “As a result, the system fulfills user- and device-specific requirements, and above all is secure,” says Behr. BMW, Daimler and Porsche already use instant3DHub at over 1,000 workstations. Even medium-sized companies such as SimScale and thinkproject have successfully implemented instantReality and instant3DHub and are developing their own individual software solutions on that basis.

Augmented reality is a key technology for Industrie 4.0

Technologies that create a link between CAD data and the real production environment are also relevant for the domain of augmented reality. “Augmented reality is a key technology for Industrie 4.0, because it constantly compares the digital target situation in real time against the actual situation as captured by cameras and sensors,” adds Dr. Ulrich Bockholt, head of the Virtual and Augmented Reality department at Fraunhofer IGD. Ultimately, however, the solution is of interest to many sectors, he explains, even in the construction and architecture field, where it can be used to help visualize building information models on smartphones, tablet computers or data goggles.


[Source:- Phys.org]