How to print from Edge for Windows 10

How do I print web pages in Edge?

Microsoft Edge features all of the standard print tools for creating copies of web pages; those important stories and official forms can be physically printed on your printer, or they can be converted to PDF for further editing. Here’s everything you need to know about printing from Edge for Windows 10.

A look at Edge’s print settings

For anyone unfamiliar with printing from the web, here’s a look at the print settings in Edge.

  • Printer: Choose which printer you’d like to use.
  • Orientation: Choose from Portrait or Landscape.
  • Copies: Choose how many full copies of the printing job you want to be printed.
  • Pages: Choose from all pages, the current page shown in the preview, or a page range. You can specify the range yourself by typing, for example, 5-7.
  • Scale: Change how large you’d like text and images to appear.
  • Margins: Set how wide or narrow you’d like the margins to be on printed pages.
  • Headers and footers: Toggle on and off. When on, the article’s title, the website, and the page number will be displayed at the top of the page, while the URL and date will be displayed at the bottom of the page.

For more settings, click More settings near the bottom of the print window.

  • Collation: Choose from collated or uncollated. Collated print jobs with multiple copies print each copy as a complete set (pages 1, 2, 3, then 1, 2, 3 again), which makes physical distribution and binding easier. Uncollated jobs group identical pages together, e.g. four copies of page one, then four copies of page two, and so on.
  • Pages per sheet: Choose how many pages you want to see on each sheet of paper. Content is scaled down to fit.
  • Paper size: Choose the paper size you’re currently using in your printer.
  • Paper type: Choose the type of paper you’re currently using in your printer.
  • Paper tray: Choose which tray on your printer to use.

How to print a webpage

First things first: here’s how to print a page as-is in Edge.

1. Launch Edge from your Start menu, taskbar, or desktop.
2. Navigate to the webpage you want to print.
3. Click the More button in the top-right corner of the window. It looks like •••
4. Click Print in the menu that appears. You can also press Ctrl+P from any page.
5. Adjust your settings, then click Print.
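
If you’d rather script this than click through the dialog, here’s a minimal sketch that drives the browser from Python. It assumes a Chromium-based Edge build (which supports the --headless and --print-to-pdf switches; the EdgeHTML-era browser covered above has no equivalent flags) installed at the default path, so treat both the path and the flags as assumptions to verify on your machine.

```python
import subprocess
from pathlib import Path

# Assumed default install path for a Chromium-based Edge; adjust as needed.
EDGE = r"C:\Program Files (x86)\Microsoft\Edge\Application\msedge.exe"
URL = "https://example.com"      # the page you want to print
OUT = Path.home() / "page.pdf"   # where the PDF lands

# --headless renders the page without opening a window;
# --print-to-pdf writes it straight to the given file.
subprocess.run([EDGE, "--headless", f"--print-to-pdf={OUT}", URL], check=True)
print(f"Saved {OUT}")
```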

[Source:- Windowscentral]

This university has no teachers, syllabus or fees

It’s 9.30am on a grey Thursday morning in May, and long banks of iMacs stand idle in a former government building on Boulevard Bessières in north Paris. The morning lack of activity, explains Xavier Niel, a French billionaire who is leading a tour of his three-year-old experimental university, isn’t a concern; rush hour is 2 or 3am.

“You’d see 300 or 400 students here at night,” Niel says. “We’re open 24 hours – the French president was here taking selfies at midnight. And you’ll notice that there are no teachers – this is a project-based school. You get no diploma.”

Niel, who made his fortune by taking on France’s telco establishment with his Free ISP and mobile businesses, declared in 2013 that “the education system doesn’t work”. So he decided to reinvent it, by funding an ambitious merit-based coding school without teachers, without a syllabus, without entrance requirements and without fees.

The school, called École 42 (the answer to the question of “life, the universe and everything” in The Hitchhiker’s Guide to the Galaxy), is funded, in Niel’s words, “by my credit card”: €20 million (£17m) for launch costs, and around €7m a year in running costs for the first decade. “After that,” he says, pointing to three American students, “we hope one of you guys will be the next Zuckerberg.”

Peter Thiel, Jack Dorsey and Tony Fadell have come here to marvel at how Niel’s school has challenged existing notions of higher education. Snapchat’s Evan Spiegel is among the converts: “You feel you’re walking into a school from the future,” he declared. “It’s a transformative way to learn.”

So, to the acclaim of his Silicon Valley friends, Niel opened a second branch in autumn 2016 in Fremont, California, with a $100m (£76m) commitment. It’s all part of his mission to make talent and merit, not means, the gateway to a quality tech education.

And it all comes down to talent-spotting via a merit-based game. “We have 80,000 applicants a year who play an online game, and 25,000 finish,” Niel explains. “We take the 3,000 best and ask them to come to the school for a month – that’s 450 hours of 15-hour days, including Saturday and Sunday. After five or six days, a third of them leave. And then we take the 1,000 best.”

The survivors – 80 to 90 per cent of whom are French, though the intake also includes many Americans – win a free education, help in finding accommodation (Niel is building 900 flats), loan guarantees of €15,000 if needed, and access to high-quality internships. “Forty per cent don’t have a Baccalaureate, and half the students in this school are from poor families and wouldn’t be able to afford it,” Niel says. An American woman with a biology degree from Yale smiles and says: “We’re the lucky ones.” Niel counters: “There’s no luck.”

The project-based curriculum consists of 21 modules – or, as Niel calls them, “game levels” – designed by six staff in an upstairs enclave called “the cluster”. Apart from a five-minute instructional video and PDF, students are left to learn in groups. After a month, they should be able to code in C; they’re challenged to build Tetris and Sudoku from scratch using their new skills. They then move at their own pace: the fastest student finished school after 18 months; others will take five years.

Game dynamics are everywhere: to get projects corrected, students must spend “correction points” – which they earn by correcting someone else’s project. If there’s a disciplinary breach, they have to spin a wheel to learn their punishment: “Take orders at the coffee machine”, or “Clean the windows with a toothbrush”. Good behaviour earns “wallet points” which can be spent.

There are still some bugs to iron out: fewer than ten per cent of students are women, which 42 is trying to change by inviting secondary-school girls to spend holiday time at the college. Graduate salaries, Niel says, are typically €42,000-€45,000 in the first year, “yet with better coding levels than US graduates earning $140,000”.

École 42 is far from the only ambitious ed-tech experiment being led by a bold tech entrepreneur. California- and Hong Kong-based Age of Learning raised $150 million last May; China’s 17zuoye recently raised $100 million; and Udemy raised $113m. Then there are Udacity, Coursera, iTutorGroup, Pluralsight, the hyper-selective Minerva Project university… indeed, AngelList documents a staggering 11,812 education startups.

Kevin Carey, author of The End of College: Creating the Future of Learning and the University of Everywhere, sees a global $4.6 trillion market being disrupted. That may not be a bad thing: under the current university system, student debt in the US alone is now estimated to exceed $1.2 trillion.

But Niel, who learned to code at 16 on a Sinclair ZX81 and dropped out of school to work on Minitel phone-connected monitors, doesn’t see himself as taking on the establishment. “France is amazing,” he tells WIRED. “I’ve helped finance a thousand startups, and I like to have a good relationship with the French establishment. I like to help entrepreneurs.”

[Source:- Wired]

Toddler robots help solve how children learn

Children learn new words using the same method as robots, according to psychologists.

This suggests that early learning is based not on conscious thought but on an automatic ability to associate words with objects, which enables babies to quickly make sense of their environment.

Dr Katie Twomey from Lancaster University, together with Dr Jessica Horst from Sussex University and Dr Anthony Morse and Professor Angelo Cangelosi from Plymouth University, wanted to find out how young children learn new words for the first time. They programmed iCub, a humanoid robot designed with proportions similar to those of a three-year-old child, with simple software that enabled the robot to hear words through a microphone and see with a camera, and trained it to point at new objects to identify them.

Dr Twomey said: “We know that two-year-old children can work out the meaning of a new word based on words they already know. That is, our toddler can work out that the new word ‘giraffe’ refers to a new toy when they can also see two others, called ‘duck’ and ‘rabbit’.”

It is thought that toddlers achieve this through a strategy known as “mutual exclusivity” where they use a process of elimination to work out that because the brown toy is called “rabbit,” and the yellow toy is called “duck,” then the orange toy must be “giraffe.”

What the researchers found is that the robot learned in exactly the same way when shown several familiar toys and one brand new toy.

Dr Twomey said: “This new study shows that mutual exclusivity behaviour can be achieved with a very simple ‘brain’ that just learns associations between words and objects. In fact, intelligent as iCub seems, it actually can’t say to itself ‘I know that the brown toy is a rabbit, and I know that the yellow toy is a duck, so this new toy must be giraffe’, because its software is too simple.

“This suggests that at least some aspects of early learning are based on an astonishingly powerful association-making ability which allows babies and toddlers to rapidly absorb information from the very complicated learning environment.”
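
To see how far bare association can go, here’s a minimal sketch of that idea in Python. It is an illustration, not the iCub’s actual software (whose details the article doesn’t give); the toy names and training loop are invented.

```python
from collections import defaultdict

# Word-object association strengths, learned purely from co-occurrence.
assoc = defaultdict(float)

def observe(word, objects_in_view):
    # Hearing a word strengthens its link to every object in view.
    for obj in objects_in_view:
        assoc[(word, obj)] += 1.0

def guess(word, objects_in_view):
    # Score each object by its link to this word minus its links to
    # all other words (simple associative competition).
    def score(obj):
        own = assoc[(word, obj)]
        others = sum(v for (w, o), v in assoc.items()
                     if o == obj and w != word)
        return own - others
    return max(objects_in_view, key=score)

# Training: familiar toys are heard with their names many times.
for _ in range(10):
    observe("duck", ["duck_toy"])
    observe("rabbit", ["rabbit_toy"])

# Test: a brand-new word with two familiar toys and one novel toy in view.
view = ["duck_toy", "rabbit_toy", "giraffe_toy"]
observe("giraffe", view)
print(guess("giraffe", view))  # -> giraffe_toy
```

The “giraffe” guess falls out of nothing but competition between counts: the familiar toys’ strong links to their own names crowd out the new word, so the novel object wins, with no explicit reasoning step anywhere.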

[Source:- SD]

CrateDB packs NoSQL flexibility, SQL familiarity

CrateDB, an open source, clustered database designed for missions like fast text search and analytics, released its first full 1.0 version last week after three years in development.

It’s built upon several existing open source technologies — Elasticsearch and Lucene, for instance — but no direct knowledge of them is needed to deploy it, as CrateDB offers more than a repackaging of those products.

The database caught the attention of InfoWorld’s Peter Wayner back in 2015 because it promised “a search engine like [Apache] Lucene [and ‘its larger, scalable, and distributed cousin Elasticsearch’], but with the structure and querying ease of SQL.”

The idea is to provide more than a full-text search system. CrateDB’s use cases include big data analytics and scalable aggregations across large data sets. It allows querying via standard ANSI SQL, but it uses a distributed, horizontally scalable architecture, so that any number of nodes can be spun up and run side by side with minimal work.

CrateDB gets two major advantages from the NoSQL side. One is support for unstructured data via JSON documents and BLOB storage, with JSON data queryable through SQL as well. Another is support for high-speed writes, making the database a suitable target for bulk data ingestion à la Hadoop.

But CrateDB’s biggest draw may be the setup process and the overall level of get-in-and-go usability. The only prerequisite is Java 8, or you can use Docker to run a provided container image. Nodes automatically discover each other as long as they’re on a network that supports multicast. The web UI can bootstrap a cluster with sample data (courtesy of Twitter), and the command-line shell uses conventional SQL syntax for inserting and querying data. Also included is support for PostgreSQL’s wire protocol, although any actual SQL commands sent through it need to adhere to CrateDB’s implementation of SQL.
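
For a concrete taste of that get-in-and-go workflow, here’s a minimal sketch using CrateDB’s Python DB-API client against a local node (for example, one started with docker run -p 4200:4200 crate). The table name and sample document are invented for illustration.

```python
# pip install crate  (CrateDB's Python DB-API client)
from crate import client

# Connect to a single local node; pass a list of servers for a cluster.
connection = client.connect("http://localhost:4200")
cursor = connection.cursor()

# A dynamic OBJECT column: the JSON document's fields become
# queryable through ordinary SQL subscript syntax.
cursor.execute("""
    CREATE TABLE IF NOT EXISTS tweets (
        id STRING PRIMARY KEY,
        payload OBJECT(DYNAMIC)
    )
""")

cursor.execute(
    "INSERT INTO tweets (id, payload) VALUES (?, ?)",
    ("1", {"user": {"name": "alice"}, "retweets": 42}),
)

# Writes become visible after a refresh; force one so the SELECT sees the row.
cursor.execute("REFRESH TABLE tweets")

cursor.execute(
    "SELECT payload['user']['name'] FROM tweets "
    "WHERE payload['retweets'] > 10"
)
print(cursor.fetchall())  # [['alice']]
```

The same SQL works unchanged in the bundled crash shell or the web UI’s console.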

CrateDB’s one of a flood of recent database products that all address specific issues that have sprung up: scalability, resiliency, mixing modalities (NoSQL vs. SQL, document vs. graph), high-speed writes, and so on. The philosophy behind such products generally runs like this: Existing solutions are too old, hidebound, or legacy-oriented to solve current and future problems, so we need a clean slate. The trick will be to see whether the benefits of the clean slate outweigh the difficulties of moving to it — hence, CrateDB’s emphasis on usability and quick starts.

[Source:- Infoworld]

IBM: Next 5 years AI, IoT and nanotech will literally change the way we see the world

Perhaps the coolest thing about IBM’s 9th “Five Innovations that will Help Change our Lives within Five Years” predictions is that none of them sound like science fiction.

“With advances in artificial intelligence and nanotechnology, we aim to invent a new generation of scientific instruments that will make the complex invisible systems in our world today visible over the next five years,” said Dario Gil, vice president of science & solutions at IBM Research in a statement.

The five areas IBM sees as key over the next five years include artificial intelligence, hyperimaging and small sensors. Specifically, according to IBM:

1. In five years, what we say and write will be used as indicators of our mental health and physical wellbeing. Patterns in our speech and writing analyzed by new cognitive systems will provide tell-tale signs of early-stage mental and neurological diseases that can help doctors and patients better predict, monitor and track these diseases. At IBM, scientists are using transcripts and audio inputs from psychiatric interviews, coupled with machine learning techniques, to find patterns in speech to help clinicians accurately predict and monitor psychosis, schizophrenia, mania and depression.

Today, it takes only about 300 words to help clinicians predict the probability of psychosis in a user. Cognitive computers can analyze a patient’s speech or written words to look for tell-tale indicators found in language, including meaning, syntax and intonation. Combining the results of these measurements with those from wearable devices and imaging systems (MRIs and EEGs) can paint a more complete picture of the individual for health professionals to better identify, understand and treat the underlying disease.
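
As a toy illustration of the kind of signal involved (my sketch, not IBM’s method), the snippet below scores the lexical coherence between consecutive sentences, a crude stand-in for the semantic-coherence features reported in speech-analysis research; real systems model meaning, syntax and intonation far more richly.

```python
import re

def sentences(text):
    # Naive sentence splitter; adequate for a toy example.
    return [s.split() for s in re.split(r"[.!?]+", text.lower()) if s.strip()]

def coherence(text):
    # Mean Jaccard word-overlap between consecutive sentences. Low
    # scores flag loosely connected, derailing speech; production
    # systems would use semantic embeddings rather than raw overlap.
    sents = sentences(text)
    if len(sents) < 2:
        return 1.0
    overlaps = []
    for a, b in zip(sents, sents[1:]):
        sa, sb = set(a), set(b)
        overlaps.append(len(sa & sb) / len(sa | sb))
    return sum(overlaps) / len(overlaps)

sample = ("I went to the shop. The shop was closed. "
          "Closed doors remind me of winter. Winter birds fly south.")
print(f"coherence: {coherence(sample):.2f}")
```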

2. In five years, new imaging devices using hyperimaging technology and AI will help us see broadly beyond the domain of visible light by combining multiple bands of the electromagnetic spectrum to reveal valuable insights or potential dangers that would otherwise be unknown or hidden from view. Most importantly, these devices will be portable, affordable and accessible, so superhero vision can be part of our everyday experiences.

A view of the invisible or vaguely visible physical phenomena all around us could help make road and traffic conditions clearer for drivers and self-driving cars. For example, using millimeter wave imaging, a camera and other sensors, hyperimaging technology could help a car see through fog or rain, detect hazardous and hard-to-see road conditions such as black ice, or tell us if there is some object up ahead and its distance and size. Embedded in our phones, these same technologies could take images of our food to show its nutritional value or whether it’s safe to eat. A hyperimage of a pharmaceutical drug or a bank check could tell us what’s fraudulent and what’s not.

3. In the next five years, new medical labs on a chip will serve as nanotechnology health detectives – tracing invisible clues in our bodily fluids and letting us know immediately if we have reason to see a doctor. The goal is to shrink down to a single silicon chip all of the processes necessary to analyze a disease that would normally be carried out in a full-scale biochemistry lab.

The lab-on-a-chip technology could ultimately be packaged in a convenient handheld device to let people quickly and regularly measure the presence of biomarkers found in small amounts of bodily fluids, sending this information streaming into the cloud from the convenience of their home. There it could be combined with data from other IoT-enabled devices, like sleep monitors and smart watches, and analyzed by AI systems for insights. When taken together, this data set will give us an in-depth view of our health and alert us to the first signs of trouble, helping to stop disease before it progresses.

4. In five years, new, affordable sensing technologies deployed near natural gas extraction wells, around storage facilities, and along distribution pipelines will enable the industry to pinpoint invisible leaks in real time. Networks of IoT sensors wirelessly connected to the cloud will provide continuous monitoring of the vast natural gas infrastructure, allowing leaks to be found in a matter of minutes instead of weeks, reducing pollution and waste and the likelihood of catastrophic events. Scientists at IBM are working with natural gas producers such as Southwestern Energy to explore the development of an intelligent methane monitoring system, as part of the ARPA-E Methane Observation Networks with Innovative Technology to Obtain Reductions (MONITOR) program.

5. In five years, we will use machine-learning algorithms and software to organize information about the physical world, bringing the vast and complex data gathered by billions of devices within the range of our vision and understanding. We call this a “macroscope” – but unlike the microscope, which sees the very small, or the telescope, which sees far away, it is a system of software and algorithms that brings all of Earth’s complex data together to analyze it for meaning.

By aggregating, organizing and analyzing data on climate, soil conditions, water levels and their relationship to irrigation practices, for example, a new generation of farmers will have insights that help them determine the right crop choices, where to plant them and how to produce optimal yields while conserving precious water supplies. Beyond our own planet, macroscope technologies could handle, for example, the complicated indexing and correlation of various layers and volumes of data collected by telescopes to predict asteroid collisions with one another and learn more about their composition.


IBM has had some success with its “five in five” predictions in the past. For example, in 2012 it predicted that computers would have a sense of smell. IBM says “sniffing” technology is already in use at the Metropolitan Museum of Art, working to preserve and protect priceless works of art by monitoring fluctuations in temperature, relative humidity, and other environmental conditions. “And this same technology is also being used in the agricultural industry to monitor soil conditions, allowing farmers to better schedule irrigation and fertilization schedules, saving water and improving crop yield,” IBM said.


In 2009 it predicted that buildings would sense and respond like living organisms. IBM says it is working with the U.S. General Services Administration (GSA) to develop and install advanced smart building technology in 50 of the federal government’s highest energy-consuming buildings. “Part of GSA’s larger smart building strategy, this initiative connects building management systems to a central cloud-based platform, improving efficiency and saving up to $15 million in taxpayer dollars annually. IBM is also helping the second largest school district in the U.S. become one of the greenest and most sustainable by making energy conservation and cost savings as easy as sending a text message,” IBM stated.

[Source:- Javaworld]

Wrists-on with Garmin’s new fenix 5 lineup at CES 2017

If you’re a serious athlete who’s been looking for a powerful multisport fitness watch, odds are you’ve stumbled across Garmin’s fēnix line of devices. While quite pricey, the fēnix 3 line has proven to be one of the most powerful multisport watch lines on the market.

At CES 2017, Garmin unveiled three new entries in its fēnix lineup: the fēnix 5, fēnix 5S and fēnix 5X.

As the names might suggest, all three of these new devices are in the same family, so they all sport most of the same features. There are a few big differentiators between the three, though. The fēnix 5S, for instance, is a lighter, sleeker and smaller version of the standard fēnix 5. The fēnix 5 is the standard model, sporting all the same features as the 5S in a bigger form factor. The 5X is the highest-end device in the bunch, complete with preloaded wrist-based mapping.

The fēnix 5 is the standard model of the group. Measuring 47mm, it’s more compact than previous models like the fēnix 3 HR, but it still packs all the multisport features you’ve come to expect from the series.

Garmin says the fēnix 5S is the first watch in the line designed specifically for female adventurers. Measuring just 42mm, the 5S is small and comfortable for petite wrists without compromising any multisport features. It’s available in silver with a choice of white, turquoise or black silicone bands, all with a mineral glass lens.

There’s also a fēnix 5S Sapphire model with a scratch-resistant sapphire lens that’s available in black with a black band, champagne with a water-resistant gray suede band, or champagne with a metal band. This model also comes with an extra silicone QuickFit band.

The higher-end fēnix 5X measures 51mm and comes preloaded with TOPO US mapping, routable cycling maps and other navigation features like Round Trip Run and Round Trip Ride. With these new features, users can enter how far they’d like to run or ride, and their watch will suggest appropriate courses to choose from. The 5X will also display easy-to-read guidance cues for upcoming turns, allowing users to be aware of their route.

In addition, the 5X’s Around Me map mode shows points of interest and other map objects within the user’s range, helping users stay more aware of their surroundings. This model will be available with a scratch-resistant sapphire lens.

[Source:- Androidauthority]

Asus ZenFone AR hands-on: Tango, Daydream, 8GB of RAM, oh my!

CES 2017 is in full swing and some of the coolest smartphone announcements at the show are coming from Asus. The Taiwanese manufacturer revealed a ZenFone 3 variant equipped with dual cameras and optical zoom, but it’s actually the ZenFone AR that really piqued our interest, thanks to a combo of great specs and advanced features from Google.

The ZenFone AR is the first high-end Tango phone (and the second overall, after the Lenovo Phab 2 Pro), the first phone that supports Tango and Daydream VR, and the first smartphone with 8GB of RAM.

That’s a lot of firsts, so let’s take a closer look at what the Asus ZenFone AR brings to the table, live from CES 2017.

As mentioned, the ZenFone AR will be the second commercially available Tango-ready smartphone, but unlike the Phab 2 Pro the ZenFone AR is much sleeker looking, more manageable in the hand, and a lot less bulky.

The phone has a full metal frame that wraps around the entire perimeter of the phone and on the back there’s a very soft leather backing that feels extremely nice and also provides a lot of grip.

Also on the back is a 23MP camera, as well as the optical hardware needed to run Tango applications – this includes sensors for motion tracking and a depth sensing camera. The Tango module takes up the space where the fingerprint sensor is usually found on Asus phones, so the sensor is now placed on the front, embedded in the physical home button, which is flanked by capacitive keys.

If you’re still somehow not familiar with what Tango is, here is a very brief explanation. Tango is an augmented reality (AR) platform created by Google. Born in Google’s advanced technologies lab, Tango graduated last year to become a real product. Tango-equipped phones can understand physical space by measuring the distance between the phone and objects in the real world. In practice, that means Tango phones can be used for AR applications like navigating through indoor spaces, but also for more recreational purposes like games. There are currently over 30 Tango apps in the Play Store, with dozens more coming this year.

Besides Tango, ZenFone AR also supports Daydream VR, Google’s virtual reality platform for mobile devices. As such, it’s compatible with Daydream View and other Daydream headsets and you’re pretty much guaranteed to have a good time using mobile VR applications on it.

The phone has all the specs you’d want on a VR-focused device, including a large, bright, and beautiful 5.7-inch Super AMOLED screen with Quad HD resolution and a Snapdragon 821 processor inside (sadly, it won’t get the brand-new Snapdragon 835, as we were hoping). The ZenFone AR will come with either 6GB of RAM or a whopping 8GB of RAM, a first for any smartphone.

All those hardware features will tax the system, so the ZenFone AR includes a vapor cooling system to help prevent the phone from overheating when using its AR and VR capabilities.

Like all Daydream-ready devices, the ZenFone AR runs Android 7.0 Nougat, though not without Asus’ ZenUI customizations on top.

[Source:- Androidauthority]