Squirrel ‘threat’ to critical infrastructure

The real threat to global critical infrastructure is not enemy states or organisations but squirrels, according to one security expert.

Cris Thomas has been tracking power cuts caused by animals since 2013.

Squirrels, birds, rats and snakes have been responsible for more than 1,700 power cuts affecting nearly 5 million people, he told a security conference.

He explained that by tracking these issues, he was seeking to dispel the hype around cyber-attacks.

His Cyber Squirrel 1 project was set up to counteract what he called the “ludicrousness of cyber-war claims by people at high levels in government and industry”, he told the audience at the Shmoocon security conference in Washington.

Squirrels topped the list with 879 “attacks”, followed by:

  • birds – 434
  • snakes – 83
  • raccoons – 72
  • rats – 36
  • martens – 22
  • frogs – 3

He concluded that the damage done by real cyber-attacks – Stuxnet’s destruction of Iranian uranium enrichment centrifuges and the disruption to Ukrainian power plants being the most high-profile – was tiny compared with the “cyber-threat” posed by animals.

Most of the animal “attacks” were on power cables, but Mr Thomas also discovered that jellyfish had shut down a Swedish nuclear power plant in 2013, by clogging the pipes that carry cooling water to the turbines.

He also found that there have been eight deaths attributed to animal attacks on infrastructure, including six caused by squirrels downing power lines that then struck people on the ground.

Mr Thomas – better known as Space Rogue – set up Cyber Squirrel 1 as a Twitter feed in March 2013 and initially collected information from Google alerts.

It has since evolved into a much larger project, collecting information from search engines and other web sources.

Mr Thomas collected only English-language reports and admitted that he was probably capturing just “a fraction” of animal-related power cuts worldwide.

“The major difference between natural events, be they geological, meteorological or furry, [and cyber-attacks] is that cyber-attacks are deliberate, orchestrated by humans,” said Luis Corrons, technical director of security firm PandaLabs.

“While natural disasters are taken into account when critical infrastructure facilities are built, that’s not the case with computers. Most critical facilities were never designed to connect to the rest of the world, so the kind of security they implemented was taking care of the physical world surrounding them.

“The number of potential attackers is growing, the number of potential targets is also going up. So we all need to reinforce our defences to the maximum – and also worry about squirrels.”

 
[Source:- BBC]

Drones take off in plant ecological research

Long-term, broad-scale ecological data are critical to plant research, but often impossible to collect on foot. Traditional data-collection methods can be time-consuming or dangerous, and can compromise habitats that are sensitive to human impact. Micro-unmanned aerial vehicles (UAVs), or drones, eliminate these data-collection pitfalls by flying over landscapes to gather unobtrusive aerial image data.

A new review in a recent issue of Applications in Plant Sciences explores when and how to use drones in plant research. “The potential of drone technology in research may only be limited by our ability to envision novel applications,” comments Mitch Cruzan, lead author of the review and professor in the Department of Biology at Portland State University. Drones can amass vegetation data over seasons or years for monitoring habitat restoration efforts, monitoring rare and threatened plant populations, surveying agriculture, and measuring carbon storage. “This technology,” says Cruzan, “has the potential for the acquisition of large amounts of information with minimal effort and disruption of natural habitats.”

For some research questions, drone surveys could be the holy grail of ecological data. Drone-captured images can map individual species in the landscape depending on the uniqueness of the spectral light values created from plant leaf or flower colors. Drones can also be paired with 3D technology to measure plant height and size. Scientists can use these images to study plant health, phenology, and reproduction, to track disease, and to survey human-mediated habitat disturbances.

Researchers can fly small drones along set transects over study areas of up to 40 hectares in size. An internal GPS system allows drones to hover over pinpointed locations and altitudes to collect repeatable, high-resolution images. Cruzan and colleagues warn researchers of “shadow gaps” when collecting data. Taller vegetation can obscure shorter vegetation, hiding them from view in aerial photographs. Thus, overlapping images are required to get the right angles to capture a full view of the landscape.
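
As a rough illustration of why that overlap matters, the sketch below works out the ground footprint of a single image and the forward overlap between consecutive shots, assuming a simple pinhole-camera model; the altitude, field of view, flight speed and shutter interval are illustrative values, not figures from the review.

```python
import math

def ground_footprint(altitude_m, fov_deg):
    """Width of ground covered by one image, for a camera with the given
    horizontal field of view flying at the given altitude over flat terrain."""
    return 2 * altitude_m * math.tan(math.radians(fov_deg) / 2)

def forward_overlap(altitude_m, fov_deg, shutter_interval_s, speed_m_s):
    """Fraction of each image that overlaps the previous one along a transect."""
    footprint = ground_footprint(altitude_m, fov_deg)
    spacing = speed_m_s * shutter_interval_s  # distance flown between shots
    return max(0.0, 1 - spacing / footprint)

# Illustrative flight plan: 100 m altitude, an 84-degree wide-angle lens,
# 5 m/s ground speed, one image every 2 seconds.
print(round(ground_footprint(100, 84), 1), "m footprint")
print(round(forward_overlap(100, 84, 2, 5) * 100), "% forward overlap")
```

Flying more slowly or shooting more often raises the overlap, which is the usual way to fill in the shadow gaps left behind taller vegetation.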

The review lists additional drone and operator requirements and desired features, including video feeds, camera stabilization, wide-angle lenses for data collection over larger areas, and must-have metadata on the drone’s altitude, speed, and elevation of every captured image.

After data collection, the georeferenced images are stitched together into a digital surface model (DSM) for analysis. GIS and programming software can then classify vegetation types, landscape features, and even individual species in the DSMs, using either manual classification or automated machine-learning techniques.
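
As a minimal sketch of the automated route, suppose the stitched imagery has been exported as a per-pixel array of bands (here red, green, blue and DSM height) and a handful of pixels have been labelled by hand; a standard classifier such as a random forest can then assign a vegetation class to every pixel. The band layout, class labels and classifier choice are assumptions for illustration, not the methods used in the review.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Stand-in for a stitched raster: rows x cols x 4 bands
# (red, green, blue, height above ground taken from the DSM).
raster = np.random.rand(500, 500, 4)

# Hand-labelled training pixels: (row, col) -> class id,
# e.g. 0 = bare ground, 1 = low vegetation, 2 = tall vegetation.
labelled = {(10, 12): 0, (40, 80): 1, (200, 220): 2, (310, 45): 1, (400, 400): 2}

X = np.array([raster[r, c] for (r, c) in labelled])
y = np.array(list(labelled.values()))

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Classify every pixel and reshape back into a map of vegetation classes.
class_map = clf.predict(raster.reshape(-1, 4)).reshape(raster.shape[:2])
print(np.bincount(class_map.ravel()))
```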

To test the effectiveness of drones, Cruzan and colleagues applied drone technology to a landscape genetics study of the Whetstone Savanna Preserve in southern Oregon, USA. “Our goal is to understand how landscape features affect pollen and seed dispersal for plant species associated with different dispersal vectors,” says Cruzan. They flew drones over vernal pools, which are threatened, seasonal wetlands. They analyzed the drone images to identify how landscape features mediate gene flow and plant dispersal in these patchy habitats. Mapping these habitats manually would have taken hundreds of hours and compromised these ecologically sensitive areas.

Before drones, the main option for aerial imaging data was light detection and ranging (LiDAR). LiDAR uses remote sensing technology to capture aerial images. However, LiDAR is expensive, requires highly specialized equipment and flyovers, and is most frequently used to capture data from a single point in time. “LIDAR surveys are conducted at a much higher elevation, so they are not useful for the more subtle differences in vegetation elevation that higher-resolution, low-elevation drone surveys can provide,” explains Cruzan.

Some limitations affect the application of new drone technology. Although purchasing a robotic drone is more affordable than alternative aerial imaging technologies, the initial investment can exceed US$1,500. National flight regulations also limit drone applications in some countries, owing to changing licensing rules, restricted flight elevations, and constraints on flights near or over private land. And if researchers are studying plant species that cannot be identified in aerial images by their spectral light values, data collection on foot is still required.

Despite limitations, flexibility is the biggest advantage to robotic drone research, says Cruzan. If the scale and questions of the study are ripe for taking advantage of drone technology, then “using a broad range of imaging technologies and analysis methods will improve our ability to detect, discriminate, and quantify different features of the biotic and abiotic environment.” As drone research increases, access to open-source analytical software programs and better equipment hardware will help researchers harness the advantages of drone technology in plant ecological research.

 

[Source:- SD]

MySQL zero-day exploit puts some servers at risk of hacking

A zero-day exploit could be used to hack MySQL servers.

A publicly disclosed vulnerability in the MySQL database could allow attackers to completely compromise some servers.

The vulnerability affects “all MySQL servers in default configuration in all version branches (5.7, 5.6, and 5.5) including the latest versions,” as well as the MySQL-derived databases MariaDB and Percona DB, according to Dawid Golunski, the researcher who found it.

The flaw, tracked as CVE-2016-6662, can be exploited to modify the MySQL configuration file (my.cnf) and cause an attacker-controlled library to be executed with root privileges if the MySQL process is started with the mysqld_safe wrapper script.

The exploit can be executed if the attacker has an authenticated connection to the MySQL service, which is common in shared hosting environments, or through an SQL injection flaw, a common type of vulnerability in websites.

Golunski reported the vulnerability to the developers of all three affected database servers, but so far only MariaDB and Percona DB have received patches. Oracle, which develops MySQL, was informed on 29 July, according to the researcher, but has yet to fix the flaw.

Oracle releases security updates on a quarterly schedule, and the next batch is expected in October. However, because the MariaDB and Percona patches have been public since the end of August, the researcher decided to release details about the vulnerability on Monday so that MySQL administrators can take action to protect their servers.

Golunski’s advisory contains a limited proof-of-concept exploit, but some parts have been intentionally left out to prevent widespread abuse. The researcher also reported a second vulnerability to Oracle, CVE-2016-6663, that could further simplify the attack, but he hasn’t published details about it yet.

The disclosure of CVE-2016-6662 was met with some criticism on specialized discussion forums, where some users argued that it’s actually a privilege escalation vulnerability and not a remote code execution one as described, because an attacker would need some level of access to the database.

“As temporary mitigations, users should ensure that no mysql config files are owned by mysql user, and create root-owned dummy my.cnf files that are not in use,” Golunski said in his advisory. “These are by no means a complete solution and users should apply official vendor patches as soon as they become available.”
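
As a sketch of how an administrator might audit the first part of that advice on a Linux host, the script below checks who owns the usual MySQL configuration file locations; the path list is an assumption to be adapted to the local installation, and the check is no substitute for applying the vendor patches.

```python
import os
import pwd

# Common MySQL/MariaDB/Percona config locations; adjust for your install.
CANDIDATE_PATHS = [
    "/etc/my.cnf",
    "/etc/mysql/my.cnf",
    "/var/lib/mysql/my.cnf",
    "/var/lib/mysql/.my.cnf",
]

for path in CANDIDATE_PATHS:
    if not os.path.exists(path):
        print(f"{path}: not present (consider a root-owned dummy file)")
        continue
    owner = pwd.getpwuid(os.stat(path).st_uid).pw_name
    verdict = "OK" if owner == "root" else "WARNING: not owned by root"
    print(f"{path}: owned by {owner} ({verdict})")
```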

Oracle didn’t immediately respond to a request for comment on the vulnerability.

 

 

[Source:- IW]

Apache Beam unifies batch and streaming for big data

Apache Beam, a unified programming model for both batch and streaming data, has graduated from the Apache Incubator to become a top-level Apache project.

Aside from becoming another full-fledged widget in the ever-expanding Apache tool belt of big-data processing software, Beam addresses ease of use and dev-friendly abstraction, rather than simply offering raw speed or a wider array of included processing algorithms.

Beam us up!

Beam provides a single programming model for creating batch and stream processing jobs (the name is a hybrid of “batch” and “stream”), and it offers a layer of abstraction for dispatching to various engines used to run the jobs. The project originated at Google, where it’s currently a service called GCD (Google Cloud Dataflow). Beam uses the same API as GCD, and it can use GCD as an execution engine, along with Apache Spark, Apache Flink (a stream processing engine with a highly memory-efficient design), and now Apache Apex (another stream engine for working closely with Hadoop deployments).

The Beam model involves five components: the pipeline (the pathway for data through the program); the “PCollections,” or data streams themselves; the transforms, for processing data; the sources and sinks, where data is fetched and eventually sent; and the “runners,” or components that allow the whole thing to be executed on an engine.
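
A minimal sketch using Beam’s Python SDK shows how those pieces line up in code: the pipeline, a PCollection created from an in-memory source, a transform, and a text-file sink, with the runner left as the default local one. The output path is illustrative.

```python
import apache_beam as beam

# The pipeline is the pathway for data. With no options it falls back to the
# local (direct) runner; targeting Dataflow, Spark, Flink or Apex is a matter
# of pipeline options rather than different code.
with beam.Pipeline() as pipeline:
    (
        pipeline
        | "Source" >> beam.Create(["alpha", "beta", "gamma"])  # a PCollection
        | "Transform" >> beam.Map(str.upper)                   # a transform
        | "Sink" >> beam.io.WriteToText("/tmp/beam-demo")      # a sink
    )
```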

Apache says it separated concerns in this fashion so that Beam can “easily and intuitively express data processing pipelines for everything from simple batch-based data ingestion to complex event-time-based stream processing.” This is in line with reworking tools like Apache Spark to support stream and batch processing within the same product and with similar programming models. In theory, it’s one fewer concept for prospective developers to wrap their head around, but that presumes Beam is used in lieu of Spark or other frameworks, when it’s more likely it’ll be used — at first — to augment them.

Hands off

One possible drawback to Beam’s approach is that while the layers of abstraction in the product make operations easier, they also put the developer at a distance from the underlying engine. A case in point is Beam’s current level of integration with Apache Spark: the Spark runner doesn’t yet use Spark’s more recent DataFrames system, and thus may not take advantage of the optimizations it can provide. But this isn’t a conceptual flaw; it’s an implementation issue that can be addressed in time.

The big payoff of using Beam, as noted by Ian Pointer in his discussion of Beam in early 2016, is that it makes migrations between processing systems less of a headache. Likewise, Apache says Beam “cleanly [separates] the user’s processing logic from details of the underlying engine.”

Separation of concerns and ease of migration will be good to have if the rivalry between the various big data processing engines continues. Granted, Apache Spark has emerged as one of the undisputed champs of the field and become a de facto standard choice. But there’s always room for improvement or an entirely new streaming or processing paradigm. Beam is less about offering a specific alternative than about giving developers and data-wranglers a broader choice between engines.

 

 

[Source:- Javaworld]

Porn videos streamed ‘via YouTube loophole’

Adult video websites appear to be exploiting a YouTube loophole to host explicit material on the platform.

News site TorrentFreak found some sites had uploaded videos that did not show up on YouTube, but could be viewed on third-party websites.

The exploit allows a website to host its video library for free, using Google’s server space and bandwidth.

YouTube told the BBC its policies “prohibit sexually explicit content like pornography”.

Videos can be uploaded to YouTube under a “private” setting that prevents them from appearing publicly on the website or in search results. This setting also disables the embed function that usually lets videos be posted on other websites.

However, TorrentFreak reported that some websites had found a way to play secretly uploaded videos on their own external services, by streaming the raw data from googlevideo.com – a domain operated by Google.

The news site said it was not clear exactly how the websites were achieving this.

Hosting videos on YouTube secretly would let an adult video site keep its costs low, while earning money selling access to its videos.

One California-based adult film producer suggested that the loophole was also being used to host pirated adult content.

“Copyright infringers take advantage of a private-video-share setting,” Dreamroom Productions told TorrentFreak.

“They upload and store videos, and freely use them on third party websites to earn profits.”

The company said Google did take down infringing copies of its content when notified, but added that the process sometimes took up to three weeks.

“YouTube should be aware of this. They are allowing the situation to continue by not plugging this hole,” the firm said.

A spokeswoman for YouTube said: “We have teams around the world that review flagged content, regardless of whether it is private, public or unlisted. Content that violates our policies is quickly removed.”

 

[Source:- BBC]

 

Transforming, self-learning software could help save the planet

Artificially intelligent computer software that can learn, adapt and rebuild itself in real-time could help combat climate change.

Researchers at Lancaster University’s Data Science Institute have developed a software system that can for the first time rapidly self-assemble into the most efficient form without needing humans to tell it what to do.

The system — called REx — is being developed with vast energy-hungry data centres in mind. By being able to rapidly adjust to optimally deal with a huge multitude of tasks, servers controlled by REx would need to do less processing, therefore consuming less energy.

REx works using ‘micro-variation’ — where a large library of building blocks of software components (such as memory caches, and different forms of search and sort algorithms) can be selected and assembled automatically in response to the task at hand.

“Everything is learned by the live system, assembling the required components and continually assessing their effectiveness in the situations to which the system is subjected,” said Dr Barry Porter, lecturer at Lancaster University’s School of Computing and Communications. “Each component is sufficiently small that it is easy to create natural behavioural variation. By autonomously assembling systems from these micro-variations we then see REx create software designs that are automatically formed to deal with their task.

“As we use connected devices on a more frequent basis, and as we move into the era of the Internet of Things, the volume of data that needs to be processed and distributed is rapidly growing. This is causing a significant demand for energy through millions of servers at data centres. An automated system like REx, able to find the best performance in any conditions, could offer a way to significantly reduce this energy demand,” Dr Porter added.

In addition, as modern software systems are increasingly complex — consisting of millions of lines of code — they need to be maintained by large teams of software developers at significant cost. It is broadly acknowledged that this level of complexity and management is unsustainable. As well as saving energy in data centres, self-assembling software models could also have significant advantages by improving our ability to develop and maintain increasingly complex software systems for a wide range of domains, including operating systems and Internet infrastructure.

REx is built using three complementary layers. At the base level a novel component-based programming language called Dana enables the system to find, select and rapidly adapt the building blocks of software. A perception, assembly and learning framework (PAL) then configures and perceives the behaviour of the selected components, and an online learning process learns the best software compositions in real-time by taking advantage of statistical learning methods known as ‘linear bandit models’.
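
REx itself learns over assemblies of Dana components using linear bandit models, but the underlying idea can be sketched with a much simpler epsilon-greedy bandit: repeatedly pick one of several interchangeable component variants, observe a reward such as response time, and shift traffic towards whichever variant performs best. The variant names and latency figures below are invented for illustration.

```python
import random

# Candidate "micro-variations" of one building block, e.g. different cache
# strategies. The true mean latencies (ms) are unknown to the learner and
# are used here only to simulate observations.
VARIANTS = {"lru_cache": 12.0, "fifo_cache": 15.0, "no_cache": 22.0}

estimates = {name: 0.0 for name in VARIANTS}  # estimated latency per variant
counts = {name: 0 for name in VARIANTS}
EPSILON = 0.1  # fraction of requests spent exploring alternatives

def choose():
    if random.random() < EPSILON or not any(counts.values()):
        return random.choice(list(VARIANTS))
    return min(estimates, key=estimates.get)  # lowest estimated latency wins

for _ in range(2000):
    variant = choose()
    observed = random.gauss(VARIANTS[variant], 2.0)  # simulated latency
    counts[variant] += 1
    # Incremental mean: nudge the estimate towards the new observation.
    estimates[variant] += (observed - estimates[variant]) / counts[variant]

print("most used:", max(counts, key=counts.get))
print({k: round(v, 1) for k, v in estimates.items()})
```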

The work is presented in the paper ‘REx: A Development Platform and Online Learning Approach for Runtime Emergent Software Systems’ at OSDI ’16, the 12th USENIX Symposium on Operating Systems Design and Implementation. The research has been partially supported by the Engineering and Physical Sciences Research Council (EPSRC) and by a PhD scholarship from Brazil.

The next steps of this research will look at the automated creation of new software components for use by these systems and will also strive to increase automation even further to make software systems an active part of their own development teams, providing live feedback and suggestions to human programmers.

[Source:- SD]

Snowflake now offers data warehousing to the masses

Snowflake, the cloud-based data warehouse solution co-founded by Microsoft alumnus Bob Muglia, is lowering storage prices and adding a self-service option, meaning prospective customers can open an account with nothing more than a credit card.

These changes also raise an intriguing question: how long can a service like Snowflake expect to reside on Amazon, which itself offers services that are more or less in direct competition — and where the raw cost of storage undercuts Snowflake’s own pricing for the same capacity?

Open to the public

The self-service option, called Snowflake On Demand, is a change from Snowflake’s original sales model. Rather than calling a sales representative to set up an account, Snowflake users can now provision services themselves with no more effort than would be needed to spin up an AWS EC2 instance.

In a phone interview, Muglia discussed how the reason for only just now transitioning to this model was more technical than anything else. Before self-service could be offered, Snowflake had to put protections into place to ensure that both the service itself and its customers could be protected from everything from malice (denial-of-service attacks) to incompetence (honest customers submitting massively malformed queries).

“We wanted to make sure we had appropriately protected the system,” Muglia said, “before we opened it up to anyone, anywhere.”

This effort was further complicated by Snowflake’s relative lack of hard usage limits, which Muglia characterized as being one of its major standout features. “There is no limit to the number of tables you can create,” Muglia said, but he further pointed out that Snowflake has to strike a balance between what it can offer any one customer and protecting the integrity of the service as a whole.

“We get some crazy SQL queries coming in our direction,” Muglia said, “and regardless of what comes in, we need to continue to perform appropriately for that customer as well as other customers. We see SQL queries that are a megabyte in size — the query statements [themselves] are a megabyte in size.” (Many such queries are poorly formed, auto-generated SQL, Muglia claimed.)

Fewer costs, more competition

The other major change is a reduction in storage pricing for the service: $30/TB/month for capacity storage, $50/TB/month for on-demand storage, and $10/TB/month for uncompressed storage.
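
For a sense of scale, the arithmetic on those published rates is straightforward; the sketch below just multiplies a storage volume by each tier’s price and is not Snowflake’s billing logic.

```python
# Published prices quoted above, in USD per terabyte per month.
PRICES_PER_TB_MONTH = {"capacity": 30, "on_demand": 50, "uncompressed": 10}

def monthly_storage_cost(terabytes, tier):
    return terabytes * PRICES_PER_TB_MONTH[tier]

for tier in PRICES_PER_TB_MONTH:
    print(f"{tier}: 100 TB costs ${monthly_storage_cost(100, tier):,}/month")
```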

It’s enough of a reduction in price that Snowflake will be unable to rely on storage costs as a revenue source, since those prices barely pay for the use of Amazon’s services as a storage provider. But Muglia is confident Snowflake is profitable enough overall that such a move won’t impact the company’s bottom line.

“We did the data modeling on this,” said Muglia, “and our margins were always lower on storage than on compute running queries.”

According to the studies Snowflake performed, “when customers put more data into Snowflake, they run more queries…. In almost every scenario you can imagine, they were very much revenue-positive and gross-margin neutral, because people run more queries.”

The long-term implications for Snowflake continuing to reside on Amazon aren’t clear yet, especially since Amazon might well be able to undercut Snowflake by directly offering competitive services.

Muglia, though, is confident that Snowflake’s offering is singular enough to stave off competition for a good long time, and is ready to change things up if need be. “We always look into the possibility of moving to other cloud infrastructures,” Muglia said, “although we don’t have plans to do it right now.”

He also noted that Snowflake competes with Amazon’s Redshift right now, but “we have a very different shape of product relative to Redshift…. Snowflake is storing multiple petabytes of data and is able to run hundreds of simultaneous concurrent queries. Redshift can’t do that; no other product can do that. It’s that differentiation that allows [us] to effectively compete with Amazon, and for that matter Google and Microsoft and Oracle and Teradata.”

 

 

[Source:- IW]

New framework uses Kubernetes to deliver serverless app architecture

A new framework built atop Kubernetes is the latest project to offer serverless or AWS Lambda-style application architecture on your own hardware or in a Kubernetes-as-a-service offering.

The Fission framework keeps the details about Docker and Kubernetes away from developers, allowing them to concentrate on the software rather than the infrastructure. It’s another example of Kubernetes becoming a foundational technology.

Some assembly, but little container knowledge, required

Written in Go and created by managed-infrastructure provider Platform9, Fission works in conjunction with any Kubernetes cluster. Developers write functions that use Fission’s API, much the same as they would for AWS Lambda. Each function runs in what’s called an environment, essentially a package for the language runtime. Triggers are used to map functions to events; HTTP routes are one common trigger.
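
A function written for Fission’s Python environment can be very small. The sketch below follows the entry-point convention in Fission’s early documentation, where the environment’s web server imports the module and calls a top-level main() for each request routed to it; the file name and greeting are illustrative.

```python
# hello.py -- uploaded to a Fission "python" environment and mapped to an
# HTTP route trigger. The environment's web server imports this module and
# calls main() whenever the route is hit; the return value becomes the
# HTTP response body.
def main():
    return "Hello from a Fission function!\n"
```

The function is then registered through the fission command-line tool and bound to an HTTP route trigger; no Dockerfile or Kubernetes manifest is involved.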

Fission lets users leverage Kubernetes and Docker to run applications without having to manage either directly. Developers don’t need to know intimate details about Docker or Kubernetes simply to ensure the application runs well. Likewise, developers don’t have to build app containers, but they can always use a prebuilt container if needed, especially if the app is larger and more complex than a single function can encapsulate.

Fission’s design allows applications to be highly responsive to triggers. When launched, Fission creates a pool of “prewarmed” containers ready to receive functions. According to Fission’s developers, this means an average of 100 milliseconds for the “cold start” of an application, although that figure will likely be dependent on the deployment and the hardware.

We’re just getting warmed up!

A few good clues indicate what Fission’s developers want to do with the project in the future. For one, the plan includes being as language- and runtime-agnostic as possible. Right now the only environments (read: runtimes) that ship with Fission are for Node.js and Python, but new ones can be added as needed, and existing ones can be modified freely. “An environment is essentially just a container with a web server and dynamic loader,” explains Fission’s documentation.

Another currently underdeveloped area that will be expanded in future releases is the variety of triggers available to Fission. Right now, HTTP routes are the only trigger type that can be used, but plans are on the table to add other triggers, such as Kubernetes events.

 

 

[Source:- Javaworld]

Apple App Store prices rise in UK, India and Turkey

Apple is to put up the price it charges for apps in the UK, India and Turkey.

UK costs will numerically match those of the US, meaning that a program that costs $0.99 will now be 99p.

That represents a 25% rise over the previous currency conversion, which was 79p.

“Price tiers on the App Store are set internationally on the basis of several factors, including currency exchange rates, business practices, taxes, and the cost of doing business,” Apple said.

“These factors vary from region to region and over time.”

The rise will also affect in-app purchases but not subscription charges.

A spokeswoman for Google was unable to comment about whether it had plans to alter prices on its Play store for Android apps.

Publishers’ choice

Apple had already adjusted the UK prices of its iPhones and iPads in September and then its Mac computers in October by a similar degree.

Other tech firms to have announced price rises in the country in the months following the Brexit vote – which has been linked to a fall in sterling’s value – include Microsoft, Dell, Tesla and HP.

To mitigate the impact of the latest increase, Apple is introducing new lower-price tiers.

Publishers will be able to charge users 49p or 79p for purchases but will have to re-price their products to do so.

“I don’t think many publishers will respond to that change,” commented Ben Dodson, an app consultant and developer of Music Tracker among other software.

“It’s just throwing money away and there’s no reason to give people in the UK a discount.

“I won’t be discounting my own apps.”

At present, $1 trades for 82p.

However, the price quoted by Apple in the UK version of its store includes the 20% VAT sales tax. In the US, state sales taxes are not included in advertised prices but are added at the point of sale. Converting $0.99 at that rate gives about 81p before tax, or roughly 97p once VAT is added, which is close to the new 99p tier.

“It was certainly inevitable that Apple would change the price point for apps in the App Store to reflect currency changes,” commented Ian Fogg from the IHS Technology consultancy.

“But this is a normal part of the way the store works because it does not have dynamically changing prices that would change gradually.”

The cost of a $0.99 app will become 80 rupees in India, representing a 33% rise from the previous price of 60 rupees.

In Turkey it will change from 2.69 to 3.49 lira, which is a gain of 30%.

The news site 9to5Mac was first to report the development.

It said the change would occur over the next seven days.

Apple has also altered the cost of apps in Romania and Russia to take account of local changes to VAT made at the start of the year.

 

 

[Source:- BBC]

Remote-controlled drone helps in designing future wireless networks

Aerial photographs and photogrammetry together provide an accurate 3D model, which improves the prediction of the propagation of radio waves at millimetre-wave frequencies.

The development of mobile devices has set increasingly high requirements for wireless networks and the use of radio frequencies. Researcher Vasilii Semkin, together with a research group at Aalto University and Tampere University of Technology, has recently investigated how aerial photographs taken with a drone can be used in designing radio links.

By using both the aerial photographs taken by the drone and photogrammetry software, they were able to create highly detailed 3D models of urban environments. These models can be used in designing radio links. Photogrammetry is a technique in which 3D models of objects are constructed from two or more photographs.

‘The measurements and simulations we performed in urban environments show that highly accurate 3D models can be beneficial for network planning at millimetre-wave frequencies’, Semkin says.
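
To see why such detailed 3D models matter at these frequencies, it helps to recall how quickly millimetre-wave links lose power even in free space. The sketch below applies the standard Friis free-space path-loss formula, comparing an illustrative 2.4 GHz link with a 28 GHz one; this is background context, not the researchers’ propagation model.

```python
import math

def free_space_path_loss_db(distance_m, frequency_hz):
    """Friis free-space path loss in dB."""
    c = 3e8  # speed of light, m/s
    return (20 * math.log10(distance_m)
            + 20 * math.log10(frequency_hz)
            + 20 * math.log10(4 * math.pi / c))

# Compare a 2.4 GHz link with a 28 GHz millimetre-wave link over 100 m.
for f_hz in (2.4e9, 28e9):
    print(f"{f_hz / 1e9:g} GHz over 100 m: "
          f"{free_space_path_loss_db(100, f_hz):.1f} dB")
```

On top of that baseline, real environments add blockage and reflection from buildings and vegetation, which is the kind of detail the drone-derived 3D models are intended to capture for network planning.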

Towards a more cost-efficient design process

The researchers compared the simple modelling technique that is currently popular to their photogrammetry-based modelling technique.

‘With the technique we used, the resulting 3D model of the environment is much more detailed, and the technique also makes it possible to carry out the design process in a more cost-efficient way. It is then easier for designers to decide which objects in the environment should be taken into account, and where the base stations should be placed to get the optimum coverage’, Semkin explains.

 

[Source:- SD]