Researchers from the UGR develop new software that adapts medical technology to see the interior of a sculpture

A student at the University of Granada (UGR) has designed software that adapts current medical technology to analyze the interior of sculptures. The tool makes it possible to see inside wood carvings without damaging them, and it was designed for the restoration and conservation of sculptural heritage.

Francisco Javier Melero, professor of Languages and Computer Systems at the University of Granada and director of the project, says that the new software simplifies medical technology and adapts it to the needs of restorers working with wood carvings.

The software, called 3DCurator, provides a specialized viewer that brings computed tomography to the field of restoration and conservation of sculptural heritage. It adapts medical CT scanning to restoration work and displays a 3-D image of the carving being examined.

Replacing traditional X-rays with this system allows restorers to examine the interior of a statue without the overlapping information produced by older techniques, revealing its internal structure, the age of the wood from which it was made, and possible later additions.

“The software that carries out this task has been simplified in order to allow any restorer to easily use it. You can even customize some functions, and it allows the restorers to use the latest medical technology used to study pathologies and apply it to constructive techniques of wood sculptures,” says professor Melero.

This system, which can be downloaded for free from www.3dcurator.es, visualizes the hidden information in a carving, verifies whether it contains metallic elements, identifies damage from xylophages such as termites and the tunnels they make, and detects plasters or polychrome paint added later over the original finishes.

The main developer of 3DCurator was Francisco Javier Bolívar, who stressed that the tool will mean a notable breakthrough in the field of conservation and restoration of cultural assets and the analysis of works of art by experts in Art History.

Professor Melero explains that this new tool has already been used to examine two sculptures owned by the University of Granada: a statue of San Juan Evangelista from the 16th century and an Immaculate Conception from the 17th century, which can be examined virtually at the Virtual Heritage Site of the Andalusian Universities (patrimonio3d.ugr.es/).

[Source:- Phys.org]

Complex 3-D data on all devices

A new web-based software platform is swiftly bringing the visualization of 3-D data to every device, optimizing the use of, for example, virtual reality and augmented reality in industry. In this way, Fraunhofer researchers have brought the ideal of “any data on any device” a good deal closer.

If you want to be sure that the person you are sending documents and pictures to will be able to open them on their computer, then you send them in PDF and JPG format. But what do you do with 3-D content? “A standardized option hasn’t existed before now,” says Dr. Johannes Behr, head of the Visual Computing System Technologies department at the Fraunhofer Institute for Computer Graphics Research IGD. In particular, industry lacks a means of taking the very large, increasingly complex volumes of 3-D data that arise and rendering them useful – and of being able to use the data on every device, from smartphones to VR goggles. “The data volume is growing faster than the means of visualizing it,” reports Behr. Fraunhofer IGD is presenting a solution to this problem in the form of its “instant3DHub” software, which allows engineers, technicians and assemblers to use spatial design and assembly plans without any difficulty on their own devices. “This will enable them to inspect industrial plants or digital buildings, etc. in real time and find out what’s going on there,” explains Behr.

Software calculates only visible components

On account of the gigantic volumes of data that have to be processed, such an undertaking has thus far been either impossible or possible only with a tremendous amount of effort. After all, users had to manually choose in advance which data should be processed for the visualization, a task then executed by expensive special software. Not exactly a cost-effective method, and a time-consuming one as well. With the web-based Fraunhofer solution, every company can adapt the visualization tool to its requirements. The software autonomously selects the data to be prepared, by intelligently calculating, for example, that only views of visible parts are transmitted to the user’s device. Citing the example of a power plant, Behr explains: “Out of some 3.5 million components, only the approximately 3,000 visible parts are calculated on the server and transmitted to the device.”
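instant3DHub’s actual culling pipeline is not public, but the idea of transmitting only visible parts can be illustrated with a toy view-cone filter. This is a minimal sketch; all names, shapes, and thresholds below are invented for illustration:

```python
import math
from dataclasses import dataclass

@dataclass
class Component:
    """A plant component approximated by a bounding sphere."""
    name: str
    center: tuple   # (x, y, z) world position
    radius: float   # bounding-sphere radius

def visible_components(components, cam_pos, cam_dir, fov_deg=60.0, max_dist=100.0):
    """Keep only components whose bounding sphere falls inside a simple
    view cone -- a stand-in for the server-side visibility calculation."""
    half_angle = math.radians(fov_deg) / 2.0
    norm = math.sqrt(sum(c * c for c in cam_dir))
    d = tuple(c / norm for c in cam_dir)           # normalized view direction
    visible = []
    for comp in components:
        v = tuple(comp.center[i] - cam_pos[i] for i in range(3))
        dist = math.sqrt(sum(c * c for c in v))
        if dist - comp.radius > max_dist:
            continue                               # too far away to render
        if dist <= comp.radius:
            visible.append(comp)                   # camera is inside the sphere
            continue
        # angle between view direction and direction to the component
        cos_angle = sum(d[i] * v[i] for i in range(3)) / dist
        angle = math.acos(max(-1.0, min(1.0, cos_angle)))
        # widen the cone test by the sphere's angular radius
        if angle - math.asin(min(1.0, comp.radius / dist)) <= half_angle:
            visible.append(comp)
    return visible
```

A real deployment would also need occlusion culling, i.e. discarding parts hidden behind other parts, which is what lets roughly 3,000 of 3.5 million components suffice in the power-plant example.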

Such visibility calculations are especially useful for VR and AR applications, as the objects being viewed at any given moment appear in the display in real time. At CeBIT, researchers will be showing how well this works, using the example of car maintenance. In a VR application, it is necessary to load up to 120 images per second onto data goggles. In this way, several thousand points of 3-D data can be transmitted from a central database for a vehicle model to a device in just one second. The process is so fast because the complete data does not have to be loaded to the device, as used to be the case, but is streamed over the web. A huge variety of 3-D web applications are delivered on the fly, without permanent storage, so that even mobile devices such as tablets and smartphones can make optimal use of them. One key feature of this process is that for every access to instant3DHub, the data is assigned to, prepared and visualized for the specific applications. “As a result, the system fulfills user- and device-specific requirements, and above all is secure,” says Behr. BMW, Daimler and Porsche already use instant3DHub at over 1,000 workstations. Even medium-sized companies such as SimScale and thinkproject have successfully implemented “instantreality” and instant3Dhub and are developing their own individual software solutions on that basis.

Augmented reality is a key technology for Industrie 4.0

Technologies that create a link between CAD data and the real production environment are also relevant for the domain of augmented reality. “Augmented reality is a key technology for Industrie 4.0, because it constantly compares the digital target situation in real time against the actual situation as captured by cameras and sensors,” adds Dr. Ulrich Bockholt, head of the Virtual and Augmented Reality department at Fraunhofer IGD. Ultimately, however, the solution is of interest to many sectors, he explains, even in the construction and architecture field, where it can be used to help visualize building information models on smartphones, tablet computers or data goggles.

[Source:- Phys.org]

Microsoft’s latest Windows update breaks multi-monitor gaming

We’ve been talking about the problem of forced non-security updates since Windows 10 launched, so I won’t belabor the point again here. Instead, I’d like to point out that the issue here isn’t even just a question of forcing an update — it’s about forcing updates that break existing system configurations. If you’ve used Microsoft Windows for any length of time, you’re aware that the OS has its own built-in mechanisms for determining which software and hardware are installed in your machine. Try to install a Windows Update that’s already been installed, and the computer informs you of that fact. Try to reinstall an application, and you get a similar message. If you try to install old graphics drivers on top of newer ones, you’ll get an error message. Windows has to know how many displays are connected to it, or it wouldn’t be able to offer color profile management or an appropriately scaled desktop. Similarly, the OS has to remember which windows belong on which screens to display information appropriately, and it knows what kind of GPU is installed.

There is, in other words, no reason why Microsoft should be pushing this update as mandatory for people who game on multiple displays. In fact, given the company’s 18-month fetish for telemetry collection, there’s no reason why Redmond couldn’t notify gamers that they may not be able to play certain titles without using workarounds. This hits one of the most annoying points of these so-called “service” models — despite the name, the “service” doesn’t actually serve the end customer. If Microsoft wanted to get end users on board with its telemetry collection, it could start by using that data in ways that actually improve the customer experience.

But since Microsoft doesn’t do that, if you’re a widescreen gamer, your choices are to disable Windows Update altogether or to hope this update doesn’t impact any titles you like playing in that configuration. There aren’t many people playing games on more than one monitor, to be sure, but this kind of regression is why people don’t like mandatory updates in the first place. We’ve seen some signs of late that MS is bending a bit on this issue by giving people the ability to defer updates by 35 days once the Creators Update (Redstone 2) drops later this year. Hopefully that’s just the first step back towards a more sane update policy.

[Source:- Extremetech]

The Nintendo Switch will need its smartphone app for online matchmaking

The Nintendo Switch companion app is fast turning into a pretty essential part of the Switch.

On top of the previously announced news that you’ll need to use the app in order to enable voice chat on the console, Nintendo of America President Reggie Fils-Aime suggested in a recent interview that the app will be used for a lot more besides voice chat.

In fact, the app’s functionality actually goes as far as enabling matchmaking and allowing you to create lobbies, suggesting that your online options are going to be pretty slim without your smartphone.

Smart (phone) justifications

Fils-Aime justified the decision to rely on the app for voice chat by saying that most people will have a headset that connects to their phone on them at all times.

As such using the phone for voice chat makes sense, as it means you don’t have to carry around an extra Switch-specific headset.

But while these justifications make a certain amount of sense for using the console while on the go, the same can’t be said for docked play, where people are used to having a dedicated headset and a console that can handle everything without needing accessories.

Fils-Aime’s use of the word ‘hotspot’ also suggests that Nintendo expects people to tether their console to their phone to get online while on the go, which might prove challenging for anyone with a limited amount of data.

It’s beginning to feel as though, in Nintendo’s quest to make a hybrid console, the Switch is fast becoming a device with limitations in both form factors.

We’ve contacted Nintendo to ask for clarification on what exactly the mobile app will enable, and what form of online play will be possible without the app.

[Source:- TechRadar]

Snapchat is now using the third-party ad targeting it once called ‘creepy’

Snapchat is now accessing its users’ offline purchase data to improve the targeting of its ads, despite its CEO having previously deemed this kind of advertising “creepy.”

Following in the footsteps of tech and social media giants such as Facebook, Twitter, and Google, Snap Inc has partnered with a third-party offline data provider called Oracle Data Cloud, according to the Wall Street Journal.

This partnership will allow Snapchat advertisers to access data about what users buy offline in order to more accurately target ads.

Snapchat gets specific

Now, rather than seeing the generally less invasive advertisements with broad consumer appeal that used to appear on Snapchat, you’re more likely to see ads that make you think “how did they know?”, as you’ll be assigned to a specific consumer demographic such as “consumer tech purchaser.”

This decision shows the company is taking its growth seriously, as it’s a different approach from the one CEO Evan Spiegel laid out in June 2015. Back then, Spiegel stated his distaste for such personalized advertising, saying: “I got an ad this morning for something I was thinking about buying yesterday, and it’s really annoying. We care about not being creepy. That’s something that’s really important to us.”

Now, however, Snap Inc has to do all it can to guarantee that its stock is worth buying when it goes public later this year. Such an advertising approach is a good way to do so, because it should make Snapchat a more attractive option to advertisers: targeted adverts are more likely to earn more per view.

Fortunately, if this kind of advertising doesn’t sit well with you, whether because you consider it invasive or because you’re just incredibly susceptible to it, Snapchat is giving its users the ability to opt out. The changed adverts have already started rolling out, so you can opt out right away.

To do so, simply go into the settings section within the Snapchat app, go to Manage Preferences, select Ad Preferences and switch off the Snap Audience Match function.

[Source:- TechRadar]

An app to crack the teen exercise code

Pokémon GO has motivated its players to walk 2.8 billion miles. Now, a new mobile game from UVM researchers aims to encourage teens to exercise with similar virtual rewards.

The game, called “Camp Conquer,” is the brainchild of co-principal investigators Lizzy Pope, assistant professor in the Department of Nutrition and Food Science, and Bernice Garnett, assistant professor of education in the College of Education and Social Services, both of the University of Vermont. The project is one of the first in the area of gamification and obesity, and will test launch with 100 Burlington High School students this month.

Here’s how it works: Real-world physical activity, tracked by a Fitbit, translates into immediate rewards in the game, a capture-the-flag-style water balloon battle with fun, summer camp flair. Every step a player takes in the real world improves their strength, speed, and accuracy in the game. “For every hundred steps, you also get currency in the game to buy items like a special water balloon launcher or new sneakers for your avatar,” says Pope.
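The reward loop Pope describes can be sketched in a few lines of Python. The 100-steps-per-coin rate comes from her quote; the per-step stat boost is an invented placeholder:

```python
def apply_steps(player, steps):
    """Convert real-world steps (e.g. from a Fitbit sync) into in-game
    rewards: currency at 1 coin per 100 steps (per the article) and a
    small, purely illustrative boost to each combat stat."""
    player["coins"] += steps // 100
    for stat in ("strength", "speed", "accuracy"):
        player[stat] += steps * 0.001   # hypothetical per-step increment
    return player
```

For example, syncing 250 steps would award 2 coins and a 0.25-point boost to each stat under these assumed rates.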

Helping Schools Meet Mandates

In 2014, Vermont established a requirement for students to get 30 minutes of physical activity during the school day (in addition to P.E. classes), a mark Pope says schools are struggling to hit. And it’s not just Vermont; according to the CDC, only 27 percent of high school students nationwide hit recommended activity goals, and 34 percent of US teens are overweight or obese.

Camp Conquer is a promising solution. The idea struck after Pope and Garnett visited Burlington High School, where they saw students playing lots of games on school-provided Chromebook laptops. Pope and Garnett approached Kerry Swift in UVM’s Office of Technology Commercialization for help. “I thought, if we’re going to make a game, it’s going to be legit,” says Pope.

Where Public Meets Private

The team is working with GameTheory, a local design studio whose mission is to create games that drive change. Pope says forming these types of UVM/private business partnerships to create technology that can be commercialized is the whole point of UVMVentures Funds, which partially support this project.

A key result of this public/private partnership, and of the cross-departmental collaboration between Pope and Garnett, was a methodology shift. Pope says it’s less common for health behavior researchers to involve their target demographic in “intervention design.” But Garnett, who has experience in community-based participatory research, and GameTheory, which commonly utilizes customer research, helped shift this. “Putting the experience of Bernice and GameTheory together, we came up with student focus groups to determine when they’re active, why they’re not, and what types of games they like to play,” says Pope. She believes this student input has Camp Conquer poised for success. “It gave us a lot of good insight, and created game champions.”

What does success look like? Pope says in her eyes, “it’s all about exciting kids to move more.” But another important aspect is the eventual commercialization of the app. “It could be widely disseminated at a very low cost. You could imagine a whole school district adopting the app,” says Pope. She expects that if the January test shows promise, GameTheory will take the game forward into the marketplace, and continue to update and improve it. “There’s definitely potential,” says Pope.

[Source:- Phys.org]

Software system labels coral reef images in record time

Computer scientists at the University of California San Diego have released a new version of a software system that processes images from the world’s coral reefs between 10 and 100 times faster than processing the data by hand.

This is possible because the new version of the system, dubbed CoralNet Beta, includes deep learning technology, which uses vast networks of artificial neurons to learn to interpret image content and to process data.

CoralNet Beta cuts the time needed to go through a typical 1,200-image diver survey of the ocean floor from 10 weeks to just one week—with the same accuracy. Coral ecologists and government organizations, such as the National Oceanic and Atmospheric Administration, also use CoralNet to automatically process images from autonomous underwater vehicles. The system allows researchers to label different types of coral and whether they’ve been bleached, as well as different types of invertebrates, algae, and more. In all, over 2,200 labels are available on the site.

“This will allow researchers to better understand the changes and degradation happening in coral reefs,” said David Kriegman, a computer science professor at the Jacobs School of Engineering at UC San Diego and one of the project’s advisers.

The Beta version of the system runs on a deep neural network with more than 147 million neural connections. “We expect users to see a very significant improvement in automated annotation performance compared to the previous version, allowing more images to be annotated quicker—meaning more time for field deployment and higher-level data analysis,” said Oscar Beijbom, a UC San Diego Ph.D. alumnus and the project’s manager and founder of CoralNet.

He created CoralNet Alpha in 2012 to help label images gathered by oceanographers around the world. Since then, more than 500 users, from research groups to nonprofits to government organizations, have uploaded more than 350,000 survey images to the system. Using a tool that labels random points within an image, designed by UC San Diego alumnus Stephen Chen, the project’s lead developer, researchers have used CoralNet Alpha to label more than five million data points across these images.
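The random-point workflow is simple to sketch: sample points within each survey image, have an annotator (human or neural network) label what lies beneath each one, and summarize the labels as percent cover. A minimal illustration, not CoralNet’s actual code:

```python
import random

def sample_points(width, height, n_points, seed=None):
    """Pick random pixel locations inside a survey image, as in
    random-point annotation (a simplified sketch of the workflow)."""
    rng = random.Random(seed)
    return [(rng.randrange(width), rng.randrange(height)) for _ in range(n_points)]

def label_coverage(labels):
    """Summarize the annotated points as percent cover per label."""
    counts = {}
    for lab in labels:
        counts[lab] = counts.get(lab, 0) + 1
    return {lab: 100.0 * c / len(labels) for lab, c in counts.items()}
```

At, say, 50 points per image, a 1,200-image survey already yields 60,000 data points, which is why automating the labeling step saves weeks of effort.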

“Over time, news of the site spread by word of mouth, and suddenly it was used all over the world,” said Beijbom.

Other updates in the Beta version include an improved user interface, web security and scalable hosting at Amazon Web Services.

[Source:- Phys.org]

China puts up Stop sign for Pokemon Go

China will not allow its mammoth mobile online population to play Pokemon Go or other augmented-reality games until it completes a review of potential security risks, a Chinese digital publishing group said.

The roadblock was put up amid concerns that such games contain “rather big social risks” including potential threats to consumer and traffic safety, and the security of “geographic information”, the China Audio-Video and Digital Publishing Association (CADPA) said this week.

The industry group said in a statement that it was informed of the move by China’s State Administration of Press, Publication, Radio, Film and Television (SAPPRFT).

It said SAPPRFT was conducting a security review of such games in the meantime.

“Before then, SAPPRFT will not accept requests to approve such games and has advised domestic game developers to be cautious when considering developing, introducing or operating such games,” the publishing association said.

Pokemon Go engages mobile users in a virtual chase for cartoon creatures appearing in their vicinity, as seen through their phone camera, but relies for many of its functions on Google Maps, which is blocked in China.

Beijing keeps tight control over surveying, mapping and geographic information.

China is a huge potential market for gamers, with 1.3 billion mobile users by the end of 2015.

Some Chinese companies are already getting into the act, with tech giants Alibaba and Tencent recently introducing augmented-reality games with a theme linked to the Chinese lunar new year holidays beginning in late January.

It was not immediately clear how the digital-publishing association’s announcement would affect those games.

[Source:- Phys.org]

Drones take off in plant ecological research

Long-term, broad-scale ecological data are critical to plant research, but often impossible to collect on foot. Traditional data-collection methods can be time consuming or dangerous, and can compromise habitats that are sensitive to human impact. Micro-unmanned aerial vehicles (UAVs), or drones, eliminate these data-collection pitfalls by flying over landscapes to gather unobtrusive aerial image data.

A new review in a recent issue of Applications in Plant Sciences explores when and how to use drones in plant research. “The potential of drone technology in research may only be limited by our ability to envision novel applications,” comments Mitch Cruzan, lead author of the review and professor in the Department of Biology at Portland State University. Drones can amass vegetation data over seasons or years for monitoring habitat restoration efforts, monitoring rare and threatened plant populations, surveying agriculture, and measuring carbon storage. “This technology,” says Cruzan, “has the potential for the acquisition of large amounts of information with minimal effort and disruption of natural habitats.”

For some research questions, drone surveys could be the holy grail of ecological data. Drone-captured images can map individual species in the landscape depending on the uniqueness of the spectral light values created from plant leaf or flower colors. Drones can also be paired with 3D technology to measure plant height and size. Scientists can use these images to study plant health, phenology, and reproduction, to track disease, and to survey human-mediated habitat disturbances.

Researchers can fly small drones along set transects over study areas of up to 40 hectares in size. An internal GPS system allows drones to hover over pinpointed locations and altitudes to collect repeatable, high-resolution images. Cruzan and colleagues warn researchers of “shadow gaps” when collecting data. Taller vegetation can obscure shorter vegetation, hiding them from view in aerial photographs. Thus, overlapping images are required to get the right angles to capture a full view of the landscape.
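The overlap requirement translates directly into flight planning: the spacing between parallel transect lines must be smaller than each image’s ground footprint. A sketch under assumed numbers (the review specifies only that overlap is required, not a percentage; the 70% default below is a common photogrammetry convention, not a figure from the article):

```python
def transect_spacing(footprint_width_m, side_overlap=0.7):
    """Spacing between parallel flight lines so that neighbouring image
    strips overlap by `side_overlap` (70% assumed for illustration)."""
    return footprint_width_m * (1.0 - side_overlap)

def transect_lines(area_width_m, footprint_width_m, side_overlap=0.7):
    """Cross-track positions of the flight lines needed to cover an area."""
    spacing = transect_spacing(footprint_width_m, side_overlap)
    x, lines = footprint_width_m / 2.0, []
    while x - footprint_width_m / 2.0 < area_width_m:
        lines.append(round(x, 3))  # hover positions along the cross-track axis
        x += spacing
    return lines
```

So a 20 m image footprint at 70% side overlap means a flight line every 6 m, and overlapping forward images along each line are what close the “shadow gaps” the authors warn about.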

The review lists additional drone and operator requirements and desired features, including video feeds, camera stabilization, wide-angle lenses for data collection over larger areas, and must-have metadata on the drone’s altitude, speed, and elevation of every captured image.

After data collection, georeferenced images are stitched together into a digital surface model (DSM) to be analyzed. GIS and programming software classify vegetation types, landscape features, and even individual species in the DSMs using manual or automated, machine-learning techniques.
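Classifying vegetation in a DSM can be as simple as assigning each pixel’s spectral values to the nearest class signature; real pipelines use trained models, but a nearest-mean sketch conveys the idea (the class names and spectral values here are invented):

```python
def classify_pixels(pixels, class_means):
    """Assign each pixel's spectral vector (e.g. R, G, B reflectance)
    to the class whose mean signature is nearest in Euclidean distance,
    a minimal stand-in for the classifiers mentioned in the review."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return [min(class_means, key=lambda c: dist2(p, class_means[c]))
            for p in pixels]
```

This is the manual end of the spectrum; the automated, machine-learning approaches mentioned above learn the class signatures from labeled training pixels instead of taking them as given.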

To test the effectiveness of drones, Cruzan and colleagues applied drone technology to a landscape genetics study of the Whetstone Savanna Preserve in southern Oregon, USA. “Our goal is to understand how landscape features affect pollen and seed dispersal for plant species associated with different dispersal vectors,” says Cruzan. They flew drones over vernal pools, which are threatened, seasonal wetlands. They analyzed the drone images to identify how landscape features mediate gene flow and plant dispersal in these patchy habitats. Mapping these habitats manually would have taken hundreds of hours and compromised these ecologically sensitive areas.

Before drones, the main option for aerial imaging data was light detection and ranging (LiDAR). LiDAR uses remote sensing technology to capture aerial images. However, LiDAR is expensive, requires highly specialized equipment and flyovers, and is most frequently used to capture data from a single point in time. “LIDAR surveys are conducted at a much higher elevation, so they are not useful for the more subtle differences in vegetation elevation that higher-resolution, low-elevation drone surveys can provide,” explains Cruzan.

Some limitations affect the application of new drone technology. Although purchasing a robotic drone is more affordable than alternative aerial-imaging technologies, initial investments can exceed US$1,500. National flight regulations also still limit drone applications in some countries because of changing licensing rules and restrictions on flight elevations and on flyovers near or over private land. And if researchers are studying plant species that cannot be identified in aerial images by their spectral light values, data collection on foot is still required.

Despite limitations, flexibility is the biggest advantage to robotic drone research, says Cruzan. If the scale and questions of the study are ripe for taking advantage of drone technology, then “using a broad range of imaging technologies and analysis methods will improve our ability to detect, discriminate, and quantify different features of the biotic and abiotic environment.” As drone research increases, access to open-source analytical software programs and better equipment hardware will help researchers harness the advantages of drone technology in plant ecological research.

[Source:- SD]

Transforming, self-learning software could help save the planet

Artificially intelligent computer software that can learn, adapt and rebuild itself in real-time could help combat climate change.

Researchers at Lancaster University’s Data Science Institute have developed a software system that can for the first time rapidly self-assemble into the most efficient form without needing humans to tell it what to do.

The system — called REx — is being developed with vast energy-hungry data centres in mind. By being able to rapidly adjust to optimally deal with a huge multitude of tasks, servers controlled by REx would need to do less processing, therefore consuming less energy.

REx works using ‘micro-variation’ — where a large library of building blocks of software components (such as memory caches, and different forms of search and sort algorithms) can be selected and assembled automatically in response to the task at hand.

“Everything is learned by the live system, assembling the required components and continually assessing their effectiveness in the situations to which the system is subjected,” said Dr Barry Porter, lecturer at Lancaster University’s School of Computing and Communications. “Each component is sufficiently small that it is easy to create natural behavioural variation. By autonomously assembling systems from these micro-variations we then see REx create software designs that are automatically formed to deal with their task.

“As we use connected devices on a more frequent basis, and as we move into the era of the Internet of Things, the volume of data that needs to be processed and distributed is rapidly growing. This is causing a significant demand for energy through millions of servers at data centres. An automated system like REx, able to find the best performance in any conditions, could offer a way to significantly reduce this energy demand,” Dr Porter added.

In addition, as modern software systems are increasingly complex — consisting of millions of lines of code — they need to be maintained by large teams of software developers at significant cost. It is broadly acknowledged that this level of complexity and management is unsustainable. As well as saving energy in data centres, self-assembling software models could also have significant advantages by improving our ability to develop and maintain increasingly complex software systems for a wide range of domains, including operating systems and Internet infrastructure.

REx is built using three complementary layers. At the base level a novel component-based programming language called Dana enables the system to find, select and rapidly adapt the building blocks of software. A perception, assembly and learning framework (PAL) then configures and perceives the behaviour of the selected components, and an online learning process learns the best software compositions in real-time by taking advantage of statistical learning methods known as ‘linear bandit models’.
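REx’s learning layer uses linear bandit models; a simpler epsilon-greedy bandit illustrates the same live loop of choosing a composition, measuring it, and converging on the best one. The component names and rewards below are invented for illustration:

```python
import random

class CompositionBandit:
    """Epsilon-greedy selection over candidate software compositions:
    mostly exploit the best-measured composition, occasionally explore.
    (REx itself uses linear bandit models; this is a simplified sketch.)"""

    def __init__(self, compositions, epsilon=0.1, seed=None):
        self.rng = random.Random(seed)
        self.epsilon = epsilon
        self.stats = {c: [0, 0.0] for c in compositions}  # pulls, mean reward

    def choose(self):
        if self.rng.random() < self.epsilon:
            return self.rng.choice(list(self.stats))            # explore
        return max(self.stats, key=lambda c: self.stats[c][1])  # exploit

    def observe(self, composition, reward):
        # incremental update of the running mean reward
        n, mean = self.stats[composition]
        n += 1
        self.stats[composition] = [n, mean + (reward - mean) / n]
```

In a live system the “reward” would be a measured performance signal, such as request throughput or response time, for the currently assembled variant.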

The work is presented in the paper ‘REx: A Development Platform and Online Learning Approach for Runtime Emergent Software Systems’ at OSDI ’16, the 12th USENIX Symposium on Operating Systems Design and Implementation. The research has been partially supported by the Engineering and Physical Sciences Research Council (EPSRC) and by a PhD scholarship from Brazil.

The next steps of this research will look at the automated creation of new software components for use by these systems and will also strive to increase automation even further to make software systems an active part of their own development teams, providing live feedback and suggestions to human programmers.

[Source:- SD]