How to Create Interactive JavaScript Charts from Custom Data Sets

Charts are a great way of visualizing complex data quickly and effectively. Whether you want to identify a trend, highlight a relationship, or make a comparison, charts help you communicate with your audience in a precise and meaningful manner.

In my previous article — Getting Started with AnyChart: 10 Practical Examples — I introduced the AnyChart library and demonstrated how it is a great fit for your data visualization needs. Today, I want to dig a little deeper and look at AnyChart’s data mapping features which allow you to create beautiful charts from custom data sets with a minimum of fuss.

I also want to look at the many ways you can customize AnyChart to suit your requirements, as well as how you can change the look and feel of AnyChart charts by using themes. There are currently 17 out-of-the-box themes to choose from, or you can create your own. And if you’ve not got the best eye for design, why not buy our book to get a leg up.

As the Head of R&D at AnyChart, I could spend all day talking about this library, but now it’s time to get down to business.

Data Mapping in AnyChart

To facilitate the integration of custom data sources into charting applications, AnyChart has special objects called data sets. These objects act as intermediate containers for data. When data is stored in data sets, AnyChart can track changes to it, analyze it, and work with this data in a more robust and effective manner. In short: interactive JavaScript charts have never been easier!

No matter if you have an array of objects, an array of arrays, or a .csv file, you can use data sets to:

  • ensure full and explicit control over the series created
  • define which column is an argument (x-axis)
  • define which columns hold values for which series
  • filter data
  • sort data

Basics of Data Mapping

The best way to learn how data mapping works in AnyChart is to look at an example. Let’s imagine an array with the following custom data set:

var rawData = [
  ["A", 5, 4, 5, 8, 1, "bad"],
  ["B", 7, 1, 7, 9, 2, "good"],
  ["C", 9, 3, 5, 4, 3, "normal"],
  ["D", 1, 4, 9, 2, 4, "bad"]
];

There’s nothing too wild going on here — this kind of custom data structure is common in a lot of existing applications. But now you want to use this array in AnyChart. With many other charting libraries you would be forced to transform the data to a format that the library can work with. Well, with AnyChart things are a lot simpler — just look what we can do. First, load the array into a data set:

var rawData = [
  ["A", 5, 4, 5, 8, 1, "bad"],
  ["B", 7, 1, 7, 9, 2, "good"],
  ["C", 9, 3, 5, 4, 3, "normal"],
  ["D", 1, 4, 9, 2, 4, "bad"]
];

var dataSet = anychart.data.set(rawData);

And then, once the data has been loaded into the data set, the real magic begins: you can now create so-called views. These are data sets derived from other data sets.

var rawData = [
  ["A", 5, 4, 5, 8, 1, "bad"],
  ["B", 7, 1, 7, 9, 2, "good"],
  ["C", 9, 3, 5, 4, 3, "normal"],
  ["D", 1, 4, 9, 2, 4, "bad"]
];

var dataSet = anychart.data.set(rawData);

var view1 = dataSet.mapAs({x: 0, value: 1});
var view2 = dataSet.mapAs({x: 0, value: 2});
var view3 = dataSet.mapAs({x: 0, high: 3, low: 4});
var view4 = dataSet.mapAs({x: 0, value: 5, meta: 6});

You’ll notice that when defining a view, you determine which columns from the original array are included and what names these columns get. You can then use them to create whichever kind of charts you like. For example, here’s how to create a pie chart from the custom data in the 5th column.

Note: AnyChart needs only x and value fields to create a pie chart, but the views also contain a meta field with the data from the 6th column. You can map any number of optional fields and use them as you like. For example, these fields can contain additional data to be shown as labels or as tooltips:

anychart.onDocumentLoad(function() {
  var rawData = [
    ["A", 5, 4, 5, 8, 3, "Bad"],
    ["B", 7, 1, 7, 9, 5, "Good"],
    ["C", 9, 3, 5, 4, 4, "Normal"],
    ["D", 1, 4, 9, 2, 3, "Bad"]
  ];

  var dataSet = anychart.data.set(rawData);
  var view4 = dataSet.mapAs({x: 0, value: 5, meta: 6});

  // create chart
  var chart = anychart.pie(view4);
  chart.title("AnyChart: Pie Chart from Custom Data Set");
  chart.labels().format("{%meta}: {%Value}");
  chart.container("container").draw();
});

And this is what we end up with:

Note: You can find all of the demos in this article as a CodePen collection.
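As the note above mentions, the extra meta field can feed tooltips as well as labels. Here’s a minimal, hedged sketch of that idea, assuming the view4 mapping from the snippet above and that tooltip() exposes the same token-based format() call used for labels():

// sketch only: reuse the meta column in the tooltip as well as the labels
var chart = anychart.pie(view4);
chart.labels().format("{%meta}: {%Value}");
chart.tooltip().format("Rating: {%meta}, value: {%Value}");
chart.container("container").draw();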

Multi-Series Combination Chart with Custom Data Set

Now, let’s see how we can use the same custom data to create a combination chart with line and range area charts on the same plot. This section is going to be very short, since you now know what views are. All you need to do is choose the proper views and create the necessary series explicitly:

anychart.onDocumentLoad(function() {
  var rawData = [
    ["A", 5, 4, 5, 8, 3, "Bad"],
    ["B", 7, 1, 7, 9, 5, "Good"],
    ["C", 9, 3, 5, 4, 4, "Normal"],
    ["D", 1, 4, 9, 2, 3, "Bad"]
  ];

  var dataSet = anychart.data.set(rawData);

  var view1 = dataSet.mapAs({x: 0, value: 1});
  var view2 = dataSet.mapAs({x: 0, value: 2});
  var view3 = dataSet.mapAs({x: 0, high: 3, low: 4});

  // create chart
  var chart = anychart.line();
  // create two line series
  chart.line(view1).name("EUR");
  chart.line(view2).name("USD");
  // create range area series
  chart.line(view2).name("Trend");

  // set title and draw chart
  chart.title("AnyChart: Combined Chart from Data Set");
  chart.container("container").draw();
});

This is what it looks like:

Live Data Streaming and Filtering

And now, let’s showcase the real beauty of views and data sets. For this we’ll create a column chart and a multi-line chart, live stream data into a data set, and accept only certain values into the column chart. Sound complicated? It isn’t really!

anychart.onDocumentLoad(function() {
  var rawData = [
    ["A", 5, 4, 2, 6, 3, "Bad"],
    ["B", 7, 2, 1, 9, 5, "Good"],
    ["C", 8, 3, 2, 9, 4, "Normal"],
    ["D", 1, 4, 1, 4, 3, "Bad"]
  ];

  // dataSet is deliberately created without var so that the streaming
  // functions below (stream and addValue) can access it
  dataSet = anychart.data.set(rawData);

  var view1 = dataSet.mapAs({ x: 0, value: 1 });
  var view2 = dataSet.mapAs({ x: 0, value: 2 });
  var view3 = dataSet.mapAs({ x: 0, value: 3 });
  var view4 = dataSet.mapAs({ x: 0, value: 4 });
  var view5 = dataSet.mapAs({ x: 0, value: 5 });

  // create chart
  var chart1 = anychart.line();
  // create several line series
  chart1.line(view1).name("EUR");
  chart1.line(view2).name("USD");
  chart1.line(view3).name("YEN");
  chart1.line(view4).name("CNY");

  // create column chart
  // based on filtered view
  // that accepts values of less than 5 only
  var chart2 = anychart.column(
    view5.filter("value", function(v) {
      return v < 5;
    })
  );

  // set title and draw multi-line chart
  chart1.title("Line: Streaming from Data Set");
  chart1.legend(true);
  chart1.container("lineContainer").draw();

  // set title and draw column chart
  chart2.title("Column: Filtering Stream");
  chart2.container("columnContainer").draw();
});

// streaming function
var streamId;
function stream() {
  if (streamId === undefined) {
    streamId = setInterval(function() {
      addValue();
    }, 1000);
  } else {
    clearInterval(streamId);
    streamId = undefined;
  }
}

// function to add new value and remove first one
function addValue() {
  // generate next letter/symbol as argument
  var x = String.fromCharCode(
    dataSet.row(dataSet.getRowsCount() - 1)[0].charCodeAt(0) + 1
  );
  // append row of random values to data set
  dataSet.append([
    x,
    Math.random() * 10,
    Math.random() * 10,
    Math.random() * 10,
    Math.random() * 10,
    Math.random() * 10
  ]);
  // remove first row
  dataSet.remove(0);
}

You see that we’ve created a custom data set and five derived views. Four of them are used as is to create lines, but when creating our column chart, we apply the filtering function and let only values of less than 5 into the view. Then we stream data by adding and removing rows from the main data set, and all the views automatically receive the new data and the charts are updated — no extra coding is needed to implement this!

This is just the tip of the iceberg as far as data mapping goes! You can do the same with arrays of objects and CSV data, and you can sort, search, listen to changes, go through values, as well as change existing rows. There are also special views for hierarchical data utilized in Tree Maps and Gantt Charts, and such views can be searched and traversed.
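For instance, mapping an array of objects looks much the same as mapping an array of arrays: the mapping simply refers to property names instead of column indexes. The field names below are made up for illustration, and the string-based mapping assumes mapAs() accepts object keys as described in the AnyChart documentation:

var rawData = [
  {name: "A", eur: 5, usd: 4},
  {name: "B", eur: 7, usd: 1},
  {name: "C", eur: 9, usd: 3}
];

var dataSet = anychart.data.set(rawData);

// map object properties by name instead of by index
var eurView = dataSet.mapAs({x: "name", value: "eur"});
var usdView = dataSet.mapAs({x: "name", value: "usd"});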

Customizing Chart Visualization

AnyChart is extremely versatile when it comes to chart customization. You can change line styles and fill colors, and use gradient or image fills on almost any element. Colors can be set using string constants, HEX or RGB notation, or a function that returns one of these values. The library also offers a number of built-in patterns, as well as an option to create patterns of your own.
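To give you a flavour of what that looks like, here’s a short, hedged sketch showing interchangeable ways a fill can be set on a column series (any one of the fill calls below would do on its own; the function variant mirrors the this.iterator pattern used in the Pareto example later on):

var chart = anychart.column([
  ["A", 3], ["B", 7], ["C", 5]
]);
var series = chart.getSeriesAt(0);

// equivalent notations for a solid fill: pick any one
series.fill("gold");              // web color constant
series.fill("#ffd700");           // HEX
series.fill("rgb(255, 215, 0)");  // RGB

// or compute the color per point with a function
series.fill(function() {
  return this.iterator.get("value") > 5 ? "green" : "red";
});

chart.container("container").draw();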

Themes

The easiest way to change the look and feel of AnyChart charts is to change the theme.

In fact, you can create your own theme or use one of those that already come with the library. There are currently 17 out-of-the-box themes available in AnyChart: Coffee, Dark Blue, Dark Earth, Dark Glamour, Dark Provence, Dark Turquoise, Default Theme, Light Blue, Light Earth, Light Glamour, Light Provence, Light Turquoise, Monochrome, Morning, Pastel, Sea, Wines. The files for these themes can be obtained from the Themes Section at AnyChart CDN.

You can reference them like so:

<script src="https://cdn.anychart.com/js/latest/anychart-bundle.min.js"></script>
<script src="https://cdn.anychart.com/themes/latest/coffee.min.js"></script>

And activate the theme with just one line:

anychart.theme(anychart.themes.coffee);

Let’s use the basic chart from the previous article by way of an example.

anychart.onDocumentLoad(function() {
  // set theme referenced in scripts section from
  // https://cdn.anychart.com/themes/latest/coffee.min.js
  anychart.theme(anychart.themes.coffee);
  // create chart and set data
  var chart = anychart.column([
    ["Winter", 2],
    ["Spring", 7],
    ["Summer", 6],
    ["Fall", 10]
  ]);
  // set chart title
  chart.title("AnyChart Coffee Theme");
  // set chart container and draw
  chart.container("container").draw();
});

Now the look and feel of that chart is completely different.

Note: you can browse through all the themes and see how they work with different chart types and series on the AnyChart Themes Demo Page. Please refer to the AnyChart Documentation to learn how to create your own themes and read about other options.
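A custom theme, by the way, is just a JavaScript object passed to anychart.theme(). As a hedged sketch, reusing the defaultSeriesSettings structure that appears in the custom pattern example later in this article, a tiny theme that recolors every series might look like this:

// minimal custom theme sketch; real themes can override many more keys
anychart.theme({
  chart: {
    defaultSeriesSettings: {
      base: {
        fill: "#4caf50",
        stroke: "#2e7d32"
      }
    }
  }
});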

Coloring

As you have seen, it’s quite trivial to color elements in the AnyChart charting library. But there’s more! It is also possible to fill shapes with solid colors at any opacity, use radial and linear gradients, and make lines dashed. You can apply colors by their web constant names, HEX codes, or RGB, RGBA, HSL, HSLA values — just as you can in CSS.
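Gradients and dashed strokes are configured with plain settings objects. The sketch below is illustrative rather than exhaustive; the keys array and the dash string follow the patterns already used in this article, but check the documentation for the full set of options:

var chart = anychart.column([
  ["A", 4], ["B", 7], ["C", 5]
]);
var series = chart.getSeriesAt(0);

// solid color with 50% opacity (the notation the Pareto sample below also uses)
series.fill("#e24b26 0.5");

// or a linear gradient between two colors (overrides the fill above)
series.fill({keys: ["#1fa3c6", "#173f5f"], angle: 90});

// dashed, thicker outline
series.stroke({color: "#173f5f", dash: "10 5", thickness: 2});

chart.container("container").draw();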

To showcase all of this in one powerful sample, I want to highlight another option along the way — namely that you can color elements using your own custom functions. This may come in handy in many data visualization situations, for example when you want to set a custom color depending on the value of the element. Let’s test this out on a Pareto chart. AnyChart can build this chart from raw data and calculate everything it needs.

anychart.onDocumentReady(function() {

  // create Pareto chart
  var chart = anychart.pareto([
    {x: "Defect 1", value: 19},
    {x: "Defect 2", value: 9},
    {x: "Defect 3", value: 28},
    {x: "Defect 4", value: 87},
    {x: "Defect 5", value: 14},
  ]);

  // set chart title
  chart.title("Pareto Chart: Conditional coloring");

  // set container id and draw
  chart.container("container").draw();
});

But say we want to highlight the elements that have a relative frequency of less than 10%. All we need to do in this case is add some coloring functions:

// Get Pareto column series
// and configure fill and stroke
var column = chart.getSeriesAt(0);
column.fill(function () {
  if (this.rf < 10) {
    return '#E24B26 0.5';
  } else {
    return this.sourceColor;
  }
});
column.stroke(function () {
  if (this.rf < 10) {
    return {color: anychart.color.darken('#E24B26'), dash:"5 5"};
  } else {
    return this.sourceColor;
  }
});

As is shown in the example, we can access the coloring function context with the help of the this keyword; we then create a condition and return whichever color or line setting we need. We use a simple HEX string with opacity for the color: '#E24B26 0.5'. For the line, we calculate a darker color using AnyChart’s special color transformation function and also set an appropriate dash parameter: {color: anychart.color.darken('#E24B26'), dash:"5 5"}.

Here’s what we end up with:

Default Pattern Fill

The AnyChart JavaScript charting library also provides a very flexible way to work with pattern (hatch) fills. To start with, you have 32 pattern fill types. They can be arranged into your own custom palettes, and you can directly set which one to use for a certain series, element, etc. Here’s what the code for a monochrome pie chart might look like:

anychart.onDocumentReady(function() {
  // create a Pie chart and set data
  chart = anychart.pie([
    ["Apple", 2],
    ["Banana", 2],
    ["Orange", 2],
    ["Grape", 2],
    ["Pineapple", 2],
    ["Strawberry", 2],
    ["Pear", 2],
    ["Peach", 2]
  ]);

  // configure chart using chaining calls to shorten sample
  chart.hatchFill(true).fill("white").stroke("Black");
  chart
    .labels()
    .background()
    .enabled(true)
    .fill("Black 1")
    .cornerType("Round")
    .corners("10%");

  // draw a chart
  chart.container("container").draw();
});

The only thing we did to enable the pattern fill is chart.hatchFill(true).

Custom Pattern Fill

AnyChart gets its ability to use custom patterns from the underlying GraphicsJS library. I introduced this library and showcased some really cool drawing samples in my article here on SitePoint: Introducing GraphicsJS, a Powerful Lightweight Graphics Library. The fact that AnyChart is powered by GraphicsJS allows for some really amazing results, but a picture is worth a thousand words, so let’s proceed to the next example.

We’ll create patterns using an old font that used to be rather popular among geologists – Interdex. I remember one of AnyChart’s customers asking us how to use it in his project, and he was very happy to learn just how easy it was.

First, if you want to use a font that is not present on your system, you need to create a web font and properly reference it in the CSS file. A sample of such a file for our Interdex font can be found on the AnyChart CDN. We need to reference it, along with the font management library being used and the AnyChart charting library:

<link rel="stylesheet" type="text/css" href="https://cdn.anychart.com/fonts/interdex/interdex.css"/>

<script src="https://cdn.anychart.com/js/latest/anychart-bundle.min.js"></script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/fontfaceobserver/2.0.7/fontfaceobserver.js"></script>

After that, we can proceed with the coding:

var fontLoad = new FontFaceObserver("Conv_interdex");

// create chart when font is loaded using
// https://cdnjs.cloudflare.com/ajax/libs/fontfaceobserver/2.0.7/fontfaceobserver.js
fontLoad.load().then(function() {
  anychart.onDocumentReady(function() {
    // enable hatch fill by default and make chart monochrome
    anychart.theme({
      chart: {
        defaultSeriesSettings: {
          base: {
            hatchFill: true,
            fill: "White",
            hoverFill: "White",
            stroke: "Black"
          }
        }
      }
    });

    // create a stage, it is needed to create a custom pattern fill
    stage = anychart.graphics.create("container");

    // create a column chart and set title
    chart = anychart.column();
    chart.title("AnyChart Custom Pattern");
    // set the data
    chart.data([
      ["Jan", 1000, 1200, 1500, 1000, 1200, 1500],
      ["Feb", 1200, 1500, 1600, 1200, 1500, 1600],
      ["Mar", 1800, 1600, 1700, 1800, 1600, 1700],
      ["Apr", 1100, 1300, 1600, 1100, 1300, 1600],
      ["May", 1900, 1900, 1500, 1900, 1900, 1500]
    ]);

    // set custom hatch palette
    // it can be populated with any number of patterns
    chart.hatchFillPalette([
      getPattern("A"),
      getPattern("N"),
      getPattern("Y"),
      getPattern("C")
    ]);

    // set container and draw chart
    chart.container(stage).draw();
  });
});

// function to create patterns
function getPattern(letter) {
  var size = 40;
  // create a text object
  var text = anychart.graphics
    .text()
    .htmlText(
      "<span " +
        "style='font-family:Conv_interdex;font-size:" +
        size +
        ";'>" +
        letter +
        "</span>"
    );
  // create a pattern object
  var pattern = stage.pattern(text.getBounds());
  //  add text to a pattern
  pattern.addChild(text);
  return pattern;
}

I’ve included all the explanations in the comments, but just to sum up – you can create custom patterns, arrange them into custom pattern palettes, and then apply those palettes to an entire chart. If you’re paying attention, you’ll notice that we’ve made use of the AnyChart themes mechanism to set the defaults.

Here’s the end result. Lovely, I’m sure you’ll agree…

Custom Series

And the final sample in this article is for those who want even more flexibility and customization. While AnyChart strives to provide an ever-increasing number of chart types out of the box, data visualization is an enormous field, and each project has different requirements. In order to make (and keep) everyone as happy as possible and provide a way to create the visualizations of their choice, AnyChart open-sourced the JavaScript drawing library of GraphicsJS and also opened the source of the AnyChart JavaScript charting library itself.

But if you really, really want to go with 100% custom drawing, forking the source code means a lot of work. Customizing might be a pain, and you could have problems merging future versions if AnyChart doesn’t accept your pull request.

Luckily, there is a third option. With most of AnyChart’s basic chart types, you can override the rendering function and change the appearance of chart elements, while keeping all of the other things AnyChart provides.

Below you will find a code sample that shows how to transform a widely used range column chart (which is supported in the AnyChart charting library) into a less common cherry chart (which isn’t). The main thing to bear in mind when you override AnyChart’s rendering function is that it calculates everything from the values supplied to a series, so your custom series should be based on a series type that works with the same number of values.

anychart.onDocumentReady(function() {
  // create a chart
  var chart = anychart.cartesian();
  chart.xAxis().title("Month");
  chart.yAxis().title("Cherry price");

  // create a data set
  var data = [
    { x: "Apr", low: 29, high: 37 },
    { x: "May" },
    { x: "Jun", low: 29, high: 47 },
    { x: "Jul", low: 12, high: 27 },
    { x: "Aug", low: 20, high: 33, color: "#ff0000" },
    { x: "Sep", low: 35, high: 44 },
    { x: "Oct", low: 20, high: 31 },
    { x: "Nov", low: 44, high: 51 }
  ];

  // create a range column series
  var series = chart.rangeColumn(data);
  // set a meta field to use as a cherry size
  // series.meta("cherry", 50);
  // set a meta field to use as a stem thickness
  // series.meta("stem", 1);

  // optional: configurable select fill
  series.selectFill("white");

  // call a custom function that changes series rendering
  cherryChartRendering(series);

  // set container id for the chart and initiate chart drawing
  chart.container("container").draw();
});

// custom function to change range column series rendering to
// cherry chart with a special value line markers
function cherryChartRendering(series) {
  // cherry fill color
  series.fill(function() {
    // check if color is set for the point and use it or series color
    var color = this.iterator.get("color") || this.sourceColor;
    return anychart.color.lighten(color, 0.25);
  });
  // cherry stroke color
  series.stroke(function() {
    // check if color is set for the point and use it or series color
    var color = this.iterator.get("color") || this.sourceColor;
    return anychart.color.darken(color, 0.1);
  });

  // set rendering settings
  // set the point drawing function
  series.rendering().point(drawer);
}

// custom drawer function to draw a cherry chart
function drawer() {
  // if value is missing - skip drawing
  if (this.missing) return;

  // get cherry size or set default
  var cherry = this.series.meta("cherry") || this.categoryWidth / 15;
  // get stem thickness or set default
  var stem = this.series.meta("stem") || this.categoryWidth / 50;

  // get shapes group
  var shapes = this.shapes || this.getShapesGroup(this.pointState);
  // calculate the left value of the x-axis
  var leftX = this.x - stem / 2;
  // calculate the right value of the x-axis
  var rightX = leftX + stem / 2;

  shapes["path"]
    // resets all 'path' operations
    .clear()
    // draw bulb
    .moveTo(leftX, this.low - cherry)
    .lineTo(leftX, this.high)
    .lineTo(rightX, this.high)
    .lineTo(rightX, this.low - cherry)
    .arcToByEndPoint(leftX, this.low - cherry, cherry, cherry, true, true)
    // close by connecting the last point with the first
    .close();
}

Here’s what we end up with:

Conclusion

I hope you’ve found this look at AnyChart’s more advanced functionality useful and that it has given you plenty of ideas and inspiration for your own data visualizations.

If you’ve not tried AnyChart yet, I urge you to give it a go. Our latest release (version 7.14.0) added eight new chart types, five new technical indicators, a Google Spreadsheet data loader, marquee select and zoom, text wrapping, and other cool new features.

And if you’re using AnyChart in your projects and have any comments or questions regarding the library, I’d love to hear them in the comments below.


DeCaffeinate Converts CoffeeScript To JavaScript

A new tool lets you automatically convert your CoffeeScript source to modern JavaScript. Decaffeinate is available now on GitHub.

CoffeeScript has been relatively successful because its code compiles one-to-one into JavaScript. The compiled output remains readable, passes through JavaScript Lint without warnings, and works in every JavaScript runtime. However, with ES6 many of the things that CoffeeScript supplied are now standard in JavaScript.

The advantage offered by ECMAScript 2015 (ES6) is that it defines the standard for the JavaScript implementation used in web browsers, is supported natively, and as it is an official standard it overcomes the need to rely on a project such as CoffeeScript that lacks such global definition and support.

CoffeeScript still has a lot of supporters, even taking ES6 into account. Developers praise its conciseness, and point out that CoffeeScript can compile to ES6 as a separate build step. Its supporters argue that CoffeeScript programs have fewer lines of code, and avoid contentious issues such as problems with undeclared vars and JavaScript’s distinction between == and ===.

Those backing decaffeinate and ES6 say that despite such reservations, the momentum is with ES6. They credit many of ES6’s good points as having been derived from CoffeeScript. The strongest argument for conversion, and for using decaffeinate, is that as a redundant technology, CoffeeScript will eventually become defunct.

 


 

The decaffeinate project can be used to convert a single file or a whole project as a batch operation. The bulk conversion tool can check a codebase for decaffeinate-readiness, and once the code (or a part of it) is ready, it will run the conversion and do some of the cleanup necessary afterwards.

There’s a fork of the CoffeeScript implementation with some small patches that are useful to the decaffeinate project.

 


 



A Beginner’s Guide to Testing Functional JavaScript

Functional programming and testing. Maybe you’ve given them a try in isolation, but somehow you never made either a part of your regular practice. They may sound innocent by themselves, but together testing and functional programming can create an irresistible temptation, almost compelling you to write cleaner, tighter, more maintainable code.


Well the good news is that working with both techniques together can offer some real advantages. In fact, once you’ve tasted how sweet this combination can be, you might find yourself as addicted to it as I am, and I’d be willing to bet that you’ll be coming back for more.

In this article, I’ll introduce you to the principles of testing functional JavaScript. I’ll show you how to get up and running with the Jasmine framework and build out a pure function using a test-driven approach.

Testing is about making sure that the code in your application does what you expect it to do, and keeps doing what you expect it to do when you make changes, so that you have a working product when you’re done. You write a test that defines your expected functionality under a defined set of circumstances, run that test against the code, and if the result isn’t what the test says it should be, you get a warning. And you keep getting that warning until you’ve fixed your code.

Then you get the reward.

And yes, it will make you feel good.

Testing comes in a lot of flavors, and there is room for healthy debate about where the borders are drawn, but in a nutshell:

  • Unit tests validate the functionality of isolated code
  • Integration tests verify the flow of data and the interaction of components
  • Functional tests look at the behavior of the overall application

Note: Don’t get distracted by the fact that there’s a type of testing called functional testing. That’s not what we’ll be focusing on in this article about testing functional JavaScript. In fact, the approach you’ll use for functional testing of the overall behavior of an application probably won’t change all that much whether or not you’re using functional programming techniques in your JavaScript. Where functional programming really helps out is when you’re building your unit tests.

You can write a test at any point in the coding process, but I’ve always found that it’s most efficient to write a unit test before writing the function you’re planning to test. This practice, known as test-driven development (TDD), encourages you to break down the functionality of your application before you start writing code and determine what results you want from each section, writing the test first, then coding to produce that result.

A side benefit is that TDD often forces you to have detailed conversations with the people who are paying you to write your programs, to make sure that what you’re writing is actually what they’re looking for. After all, it’s easy to make a single test pass. What’s hard is determining what to do with all the likely inputs you’re going to encounter and handle them all correctly without breaking things.

Why Functional?

As you can imagine, the way you write your code has a lot to do with how easy it is to test. There are some code patterns, such as tightly coupling the behavior of one function to another, or relying heavily on global variables, that can make code much more difficult to unit test. Sometimes you may have to use inconvenient techniques such as “mocking” the behavior of an external database or simulating a complicated runtime environment in order to establish testable parameters and results. These situations can’t always be avoided, but it is usually possible to isolate the places in the code where they are required so that the rest of the code can be tested more easily.

Functional programming allows you to deal with the data and the behavior in your application independently. You build your application by creating a set of independent functions that each work in isolation and don’t rely on external state. As a result, your code becomes almost self-documenting, tying together small clearly defined functions that behave in consistent and understandable ways.

Functional programming is often contrasted with imperative programming and object-oriented programming. JavaScript can support all of these techniques, and even mix-and-match them. Functional programming can be a worthwhile alternative to creating sequences of imperative code that track the state of the application across multiple steps until a result is returned, or to building your application out of interactions across complex objects that encapsulate all of the methods that apply to a specific data structure.

How Pure Functions Work

Functional programming encourages you to build your application out of tiny, reusable, composable functions that just do one specific thing and return the same value for the same input every single time. A function like this is called a pure function. Pure functions are the foundation of functional programming, and they all share these three qualities:

  • Don’t rely on external state or variables
  • Don’t cause side effects or alter external variables
  • Always return the same result for the same input
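
To make that concrete, here’s a quick illustrative sketch contrasting an impure function with a pure one:

// impure: depends on and mutates state outside the function
let total = 0;
const addToTotal = (n) => {
  total += n;    // side effect: changes an external variable
  return total;  // result depends on previous calls
};

// pure: the same input always produces the same output, nothing else is touched
const add = (a, b) => a + b;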

Another advantage of writing functional code is that it makes it much easier to do unit testing. The more of your code that you can unit test, the more comfortably you can count on your ability to refactor the code in the future without breaking essential functionality.

What Makes Functional Code Easy to Test?

If you think about the concepts that we just discussed, you probably already see why functional code is easier to test. Writing tests for a pure function is trivial, because every single input has a consistent output. All you have to do is set the expectations and run them against the code. There’s no context that needs to be established, there are no inter-functional dependencies to keep track of, there’s no changing state outside of the function that needs to be simulated, and there are no variable external data sources to be mocked out.

There are a lot of testing options out there ranging from full-fledged frameworks to utility libraries and simple testing harnesses. These include Jasmine, Mocha, Enzyme, Jest, and a host of others. Each one has different advantages and disadvantages, best use cases, and a loyal following. Jasmine is a robust framework that can be used in a wide variety of circumstances, so here’s a quick demonstration of how you might use Jasmine and TDD to develop a pure function in the browser.

You can create an HTML document that pulls in the Jasmine testing library either locally or from a CDN. An example of a page including the Jasmine library and test runner might look something like this:

<!DOCTYPE html>
<html lang="en">
  <head>
    <meta charset="utf-8">
    <title>Jasmine Test</title>
    <link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/jasmine/2.6.1/jasmine.min.css">
  </head>
  <body>

    <script src="https://cdnjs.cloudflare.com/ajax/libs/jasmine/2.6.1/jasmine.min.js"></script>
    <script src="https://cdnjs.cloudflare.com/ajax/libs/jasmine/2.6.1/jasmine-html.min.js"></script>
    <script src="https://cdnjs.cloudflare.com/ajax/libs/jasmine/2.6.1/boot.min.js"></script>
  </body>
</html>

This brings in the Jasmine library, along with the Jasmine HTML boot script and styling. In this case the body of the document is empty, waiting for your JavaScript to test, and your Jasmine tests.

Testing Functional JavaScript — Our First Test

To get started, let’s write our first test. We can do this in a separate document, or by including it inside a <script> element on the page. We’re going to use the describe function defined by the Jasmine library to describe the desired behavior for a new function we haven’t written yet.

The new function we’re going to write will be called isPalindrome and it will return true if the string passed in is the same forwards and backwards, and return false otherwise. The test will look like this:

describe("isPalindrome", () => {
  it("returns true if the string is a palindrome", () => {
    expect(isPalindrome("abba")).toEqual(true);
  });
});

When we add this to a script in our page and load it in a browser, we get a working Jasmine report page showing an error. Which is what we want at this point. We want to know that the test is being run, and that it’s failing. That way our approval-hungry brain knows that we have something to fix.

So let’s write a simple function in JavaScript, with just enough logic to get our test to pass. In this case it’s just going to be a function that makes our one test pass by returning the value expected.

const isPalindrome = (str) => true;

Yes, really. I know it looks ridiculous, but hang in there with me.

When the test runs again, it passes. Of course. But obviously this simple code does not do what we might expect a palindrome tester to do. We’ve written the minimal amount of code that makes the test pass. But we know that our code would fail to evaluate palindromes effectively. What we need at this point are additional expectations. So let’s add another assertion to our describe function:

describe("isPalindrome", () => {
  it("returns true if the string is a palindrome", () => {
    expect(isPalindrome("abba")).toEqual(true);
  });
  it("returns false if the string isn't a palindrome", () => {
    expect(isPalindrome("Bubba")).toEqual(false);
  });
});

Reloading our page now makes the test output turn red and fail. We get a message saying what the problem is, and the test result turns red.

Red!

Our brains sense that there’s a problem.

Of course there is. Now our simple isPalindrome function that just returns true every time has been demonstrated not to work effectively against this new test. So let’s update isPalindrome adding the ability to compare a string passed in forward and backward.

const isPalindrome = (str) => {
  return str
    .split("")
    .reverse()
    .join("") === str;
};

Testing Is Addictive

Green again. Now that’s satisfying. Did you get that little dopamine rush when you reloaded the page?

With these changes in place, our test passes again. Our new code effectively compares the forward and backward string, and returns true when the string is the same forward and backward, and false otherwise.

This code is a pure function because it is just doing one thing, and doing it consistently given a consistent input value without creating any side effects, making any changes to variables outside of itself, or relying on the state of the application. Every time you pass this function a string, it does a comparison between the forward and backward string, and returns the result regardless of when or how it is called.

You can see how easy that kind of consistency makes it to unit test this function. In fact, writing test-driven code can encourage you to write pure functions because they are so much easier to test and modify.

And you want the satisfaction of a passing test. You know you do.

Refactoring a Pure Function

At this point it’s trivial to add additional functionality, such as handling non-string input, ignoring differences between uppercase and lowercase letters, etc. Just ask the product owner how they want the program to behave. Since we already have tests in place to verify that strings will be handled consistently, we can now add error checking or string coercion or whatever behavior we like for non-string values.

For example, let’s see what happens if we add a test for a number like 1001 that might be interpreted as a palindrome if it were a string:

describe("isPalindrome", () => {
  it("returns true if the string is a palindrome", () => {
    expect(isPalindrome("abba")).toEqual(true);
  });
  it("returns false if the string isn't a palindrome", () => {
    expect(isPalindrome("Bubba")).toEqual(false);
  });
  it("returns true if a number is a palindrome", () => {
    expect(isPalindrome(1001)).toEqual(true);
  });
});

Doing this gives us a red screen and a failing test again, because our current isPalindrome function doesn’t know how to deal with non-string inputs.

Panic sets in. We see red. The test is failing.

But now we can safely update it to handle non-string inputs, coercing them into strings and checking them that way. We might come up with a function that looks a little bit more like this:

const isPalindrome = (str) => {
  return str
    .toString()
    .split("")
    .reverse()
    .join("") === str.toString();
};

And now all our tests pass, we’re seeing green, and that sweet, sweet dopamine is flooding into our test-driven brains.

By adding toString() to the evaluation chain, we’re able to accommodate non-string inputs and convert them to strings before testing. And best of all, because our other tests are still being run every time, we can be confident that we haven’t broken the functionality we got before by adding this new capability to our pure function. Here’s what we end up with:

Play around with this test, and start writing some of your own, using Jasmine or any other testing library that suits you.
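
If you want to keep going, one of the extensions mentioned earlier, ignoring the difference between uppercase and lowercase letters, is just one more expectation and one more step in the chain. A possible sketch:

// added inside the existing describe("isPalindrome", ...) block
it("ignores case when comparing", () => {
  expect(isPalindrome("Abba")).toEqual(true);
});

// lowercase the coerced string before comparing
const isPalindrome = (str) => {
  const s = str.toString().toLowerCase();
  return s
    .split("")
    .reverse()
    .join("") === s;
};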

Once you incorporate testing into your code design workflow, and start writing pure functions to unit test, you may find it hard to go back to your old life. But you won’t ever want to.

This article was peer reviewed by Vildan Softic. Thanks to all of SitePoint’s peer reviewers for making SitePoint content the best it can be!


US tech industry says immigration order affects their operations


The U.S. tech industry has warned that a temporary entry suspension on certain foreign nationals introduced on Friday by the administration of President Donald Trump will impact these companies’ operations that are dependent on foreign workers.

The Internet Association, which has a number of tech companies including Google, Amazon, Facebook and Microsoft as its members, said that Trump’s executive order limiting immigration and movement into the U.S. has troubling implications, as its member companies and firms in many other industries include legal immigrant employees who are covered by the orders and will not be able to return to their jobs and families in the U.S.

“Their work benefits our economy and creates jobs here in the United States,” said Internet Association President and CEO Michael Beckerman in a statement over the weekend.

Executives of a number of tech companies like Twitter, Microsoft and Netflix have expressed concern about the executive order signed by Trump, which suspended for 90 days entry into the U.S. of persons from seven Muslim-majority countries – Iran, Iraq, Libya, Somalia, Sudan, Syria and Yemen – as immigrants and non-immigrants. The Trump administration has described the order as a move to prevent foreign terrorist entry into the U.S.

Tech companies like Uber, Apple, Microsoft and Google are in touch with employees affected by the order, according to reports. Uber is working on a scheme to compensate some of its drivers who come from the listed countries and had taken long breaks to see their extended families and are now unable to come back to the U.S., wrote CEO Travis Kalanick, who is a member of Trump’s business advisory group.

“As an immigrant and as a CEO, I’ve both experienced and seen the positive impact that immigration has on our company, for the country, and for the world,” wrote Satya Nadella, Microsoft CEO, in an online post over the weekend. “We will continue to advocate on this important topic.” Netflix CEO Reed Hastings wrote in a Facebook post that “Trump’s actions are hurting Netflix employees around the world, and are so un-American it pains us all.”

The tech industry is also concerned about further moves by the government on immigration policy that could place restrictions on visas for the entry of people who help these companies run their operations and develop products and services. The H-1B visa program has been criticized for replacing U.S. workers.

Microsoft’s Chief Legal Officer Brad Smith said in a note to employees on Saturday that the company believes in “a strong and balanced high-skilled immigration system.”

 


 

Google creates ‘crisis fund’ following US immigration ban


Tech giant Google has created a US$2 million crisis fund in response to US president Donald Trump’s immigration ban.

Google staff are also being invited to top up the fund, with the money going towards the American Civil Liberties Union (ACLU), Immigrant Legal Resource Center (ILRC), International Rescue Committee (IRC), and the UN High Commissioner for Refugees (UNHCR).

“We chose these organisations for their incredible efforts in providing legal assistance and support services for immigrants, as well as their efforts on resettlement and general assistance for refugees globally,” a Google spokesperson said.

The announcement follows requests by Google CEO Sundar Pichai last week for staff travelling overseas to come back to the US. More than 100 staff are affected by President Trump’s executive order on immigration.

Since 2015, Google has given more than US$16 million to organisations focused on humanitarian aid for refugees on the ground, WiFi in refugee camps, and education for out of school refugee children in Lebanon, the spokesperson said.

Microsoft CEO Satya Nadella has also responded to the crisis, saying that as an immigrant himself, he has experienced the positive impact that immigration has on the company, the country and the world.

Nadella said Microsoft was providing legal advice and assistance to 76 staff who have a US visa and are citizens of Syria, Iraq, Iran, Libya, Somalia, Yemen, and Sudan.

In an email sent to Microsoft staff, Brad Smith said that Microsoft believes in a strong and balanced skilled immigration system.

“We also believe in broader-immigration opportunities, like the protections for talented and law-abiding young people under the Deferred Action for Childhood Arrivals (DACA) program. We believe that immigration laws can and should protect the public without sacrificing people’s freedom of expression or religion. And we believe in the importance of protecting legitimate and law-abiding refugees whose very lives may be at stake in immigration proceedings,” he said.

 

 


GitLab database goes out after spam attack


Code-hosting site GitLab has suffered an outage after sustaining a “serious” incident on Tuesday with one of its databases that has required emergency maintenance.

The company today said it lost six hours of database data, including issues, merge requests, users, comments, and snippets, for GitLab.com and was in the process of restoring data from a backup. The data was accidentally deleted, according to a Twitter message.

“Losing production data is unacceptable, and in a few days we’ll post the five whys of why this happened and a list of measures we will implement,” GitLab said in a bulletin this morning. Git and wiki repositories and self-hosted installations were unaffected.

The restoration means any data between 17:20 UTC and 23:25 UTC from the database is lost by the time GitLab.com goes live again. Providing a chronology of events, GitLab said it detected on Monday that spammers were hammering its database by creating snippets, rendering it unstable. GitLab blocked the spammers based on an IP address and removed a user who was using a repository as a form of CDN, which had resulted in 47,000 IPs signing in with the same account and causing a high database load; GitLab then removed those users for spamming.

The company provided a statement this morning: “This outage did not affect our Enterprise customers or the wide majority of our users. As part of our ongoing recovery efforts, we are actively investigating a potential data loss. If confirmed, this data loss would affect less than one percent of our user base, and specifically peripheral metadata that was written during a six-hour window,” the company said. “We have been working around the clock to resume service on the affected product, and set up long-term measures to prevent this from happening again. We will continue to keep our community updated through Twitter, our blog and other channels.”

While dealing with the problem, GitLab found database replication lagged far behind, effectively stopping. “This happened because there was a spike in writes that were not processed on time by the secondary database.” GitLab has been dealing with a series of database issues, including a refusal to replicate.

GitLab.com went down at 6:28 pm PST on Tuesday and was back up at 9:57 am PST today, said Tim Anglade, interim vice president of marketing at the company.

 

 


 

Go 1.8 goes for efficiency and convenience


Go 1.8, the next version of Google’s open source language, is moving toward general availability, with a release candidate featuring improvements in compilation and HTTP. The final Version 1.8 is due in February.

According to draft notes, the release candidate features updates to the compiler back end for more efficient code. The back end, initially developed for Go 1.7 for 64-bit x86 systems, is based on static single assignment (SSA) form to generate more efficient code and to serve as a platform for optimizations like bounds check elimination. It now works on all architectures.

“The new back end reduces the CPU time required by our benchmark programs by 20 to 30 percent on 32-bit ARM systems,” the release notes say. “For 64-bit x86 systems, which already used the SSA back end in Go 1.7, the gains are a more modest 0 to 10 percent. Other architectures will likely see improvements closer to the 32-bit ARM numbers.”

Version 1.8 also introduces a new compiler front end as a foundation for future performance enhancements, and it features shorter garbage collection pauses by eliminating “stop the world” stack rescanning.

The release notes also cite HTTP/2 push support, in which the net/http package can send HTTP/2 server pushes from a handler responding to an HTTP request. Additionally, an HTTP server can now be shut down gracefully via a Server.Shutdown method or abruptly via a Server.Close method.

Version 1.8 adds support for the MIPS 32-bit architecture on Linux and offers more context support in APIs like Server.Shutdown, database/sql, and net.Resolver. Go’s sort package adds a convenience function, Slice, to sort a slice given a less function. “In many cases this means that writing a new sorter type is not necessary.” Runtime and tools in Go 1.8 support profiling of contended mutexes, which provide a mutual exclusion lock.

Most of the upgrade’s changes are in the implementation of the toolchain, runtime, and libraries. “There are two minor changes to the language specification,” the release notes state. “As always, the release maintains the Go 1 promise of compatibility. We expect almost all Go programs to continue to compile and run as before.” The language changes are that struct tags are now ignored when converting a value from one struct type to another, and that the language specification now only requires implementations to support up to 16-bit exponents in floating-point constants.

 

 


Oracle to Java devs: Stop signing JAR files with MD5


Starting in April, Oracle will treat JAR files signed with the MD5 hashing algorithm as if they were unsigned, which means modern releases of the Java Runtime Environment (JRE) will block those JAR files from running. The shift is long overdue, as MD5’s security weaknesses are well-known, and more secure algorithms should be used for code signing instead.

“Starting with the April Critical Patch Update releases, planned for April 18, 2017, all JRE versions will treat JARs signed with MD5 as unsigned,” Oracle wrote on its Java download page.

Code-signing JAR files bundled with Java libraries and applets is a basic security practice, as it lets users know who actually wrote the code and that it has not been altered or corrupted since it was written. In recent years, Oracle has been beefing up Java’s security model to better protect systems from external exploits and to allow only signed code to execute certain types of operations. An application without a valid certificate is potentially unsafe.

Newer versions of Java now require all JAR files to be signed with a valid code-signing key, and starting with Java 7 Update 51, unsigned or self-signed applications are blocked from running.

Code signing is an important part of Java’s security architecture, but the MD5 hash weakens the very protections code signing is supposed to provide. Dating back to 1992, MD5 is used for one-way hashing: taking an input and generating a unique cryptographic representation that can be treated as an identifying signature. No two inputs should result in the same hash, but since 2005, security researchers have repeatedly demonstrated that a file could be modified and still have the same hash in collision attacks. While MD5 is no longer used for TLS/SSL—Microsoft deprecated MD5 for TLS in 2014—it remains prevalent in other security areas despite its weaknesses.

With Oracle’s change, “affected MD-5 signed JAR files will no longer be considered trusted [by the Oracle JRE] and will not be able to run by default, such as in the case of Java applets, or Java Web Start applications,” Erik Costlow, an Oracle product manager with the Java Platform Group, wrote back in October.

Developers need to verify that their JAR files have not been signed using MD5, and if they have, re-sign the affected files with a more modern algorithm. Administrators need to check with vendors to ensure the files are not MD5-signed. If the files are still signed with MD5 at the time of the switchover, users will see an error message that the application could not run. Oracle has already informed vendors and source licensees of the change, Costlow said.

In cases where the vendor is defunct or unwilling to re-sign the application, administrators can disable the process that checks for signed applications (which has serious security implications), set up custom Deployment Rule Sets for the application’s location, or maintain an Exception Site List, Costlow wrote.

There was plenty of warning. Oracle stopped using MD5 with RSA algorithm as the default JAR signing option with Java SE6, which was released in 2006. The MD5 deprecation was originally announced as part of the October 2016 Critical Patch Update and was scheduled to take effect this month as part of the January CPU. To ensure developers and administrators were ready for the shift, the company has decided to delay the switch to the April Critical Patch Update, with Oracle Java SE 8u131 and corresponding releases of Oracle Java SE 7, Oracle Java SE 6, and Oracle JRockit R28.

“The CA Security Council applauds Oracle for its decision to treat MD5 as unsigned. MD5 has been deprecated for years, making the move away from MD5 a critical upgrade for Java users,” said Jeremy Rowley, executive vice president of emerging markets at Digicert and a member of the CA Security Council.

Deprecating MD5 has been a long time coming, but it isn’t enough. Oracle should also look at deprecating SHA-1, which has its own set of issues, and adopt SHA-2 for code signing. That course of action would be in line with the current migration, as major browsers have pledged to stop supporting websites using SHA-1 certificates. With most organizations already involved with the SHA-1 migration for TLS/SSL, it makes sense for them to also shift the rest of their certificate and key signing infrastructure to SHA-2.

The good news is that Oracle plans to disable SHA-1 in certificate chains anchored by roots included by default in Oracle’s JDK at the same time MD5 gets deprecated, according to the JRE and JDK Crypto Roadmap, which outlines technical instructions and information about ongoing cryptographic work for Oracle JRE and Oracle JDK. The minimum key length for Diffie-Hellman will also be increased to 1,024 bits later in 2017.

The road map also claims Oracle recently added support for the SHA224withDSA and SHA256withDSA signature algorithms to Java 7, and disabled Elliptic Curve (EC) for keys of less than 256 bits for SSL/TLS for Java 6, 7, and 8.

 

 


Attackers start wiping data from CouchDB and Hadoop databases

Data-wiping attacks have hit exposed Hadoop and CouchDB databases.

It was only a matter of time until ransomware groups that wiped data from thousands of MongoDB databases and Elasticsearch clusters started targeting other data storage technologies. Researchers are now observing similar destructive attacks hitting openly accessible Hadoop and CouchDB deployments.

Security researchers Victor Gevers and Niall Merrigan, who monitored the MongoDB and Elasticsearch attacks so far, have also started keeping track of the new Hadoop and CouchDB victims. The two have put together spreadsheets on Google Docs where they document the different attack signatures and messages left behind after data gets wiped from databases.

In the case of Hadoop, a framework used for distributed storage and processing of large data sets, the attacks observed so far can be described as vandalism.

That’s because the attackers don’t ask for payments to be made in exchange for returning the deleted data. Instead, their message instructs the Hadoop administrators to secure their deployments in the future.

According to Merrigan’s latest count, 126 Hadoop instances have been wiped so far. The number of victims is likely to increase because there are thousands of Hadoop deployments accessible from the internet — although it’s hard to say how many are vulnerable.

The attacks against MongoDB and Elasticsearch followed a similar pattern. The number of MongoDB victims jumped from hundreds to thousands in a matter of hours and to tens of thousands within a week. The latest count puts the number of wiped MongoDB databases at more than 34,000 and that of deleted Elasticsearch clusters at more than 4,600.

A group called Kraken0, responsible for most of the ransomware attacks against databases, is trying to sell its attack toolkit and a list of vulnerable MongoDB and Elasticsearch installations for the equivalent of US$500 in bitcoins.

The number of wiped CouchDB databases is also growing rapidly, reaching more than 400 so far. CouchDB is a NoSQL-style database platform similar to MongoDB.

Unlike the Hadoop vandalism, the CouchDB attacks are accompanied by ransom messages, with attackers asking for 0.1 bitcoins (around $100) to return the data. Victims are advised against paying because, in many of the MongoDB attacks, there was no evidence that attackers had actually copied the data before deleting it.

Researchers from Fidelis Cybersecurity have also observed the Hadoop attacks and have published a blog post with more details and recommendations on securing such deployments.

The destructive attacks against online database storage systems are not likely to stop soon because there are other technologies that have not yet been targeted and that might be similarly misconfigured and left unprotected on the internet by users.

 

 


Google open-sources test suite to find crypto bugs


Working with cryptographic libraries is hard, and a single implementation mistake can result in serious security problems. To help developers check their code for implementation errors and find weaknesses in cryptographic software libraries, Google has released a test suite as part of Project Wycheproof.

“In cryptography, subtle mistakes can have catastrophic consequences, and mistakes in open source cryptographic software libraries repeat too often and remain undiscovered for too long,” Google security engineers Daniel Bleichenbacher and Thai Duong, wrote in a post announcing the project on the Google Security blog.

Named after Australia’s Mount Wycheproof, the world’s smallest mountain, Wycheproof provides developers with a collection of unit tests that detect known weaknesses in cryptographic algorithms and check for expected behaviors. The first set of tests is written in Java because Java has a common cryptographic interface and can be used to test multiple providers.

“We recognize that software engineers fix and prevent bugs with unit testing, and we found that many cryptographic issues can be resolved by the same means,” Bleichenbacher and Duong wrote.

The suite can be used to test such cryptographic algorithms as RSA, elliptic curve cryptography, and authenticated encryption, among others. The project also has ready-to-use tools to check Java Cryptography Architecture providers, such as Bouncy Castle and the default providers in OpenJDK. The engineers said they are converting the tests into sets of test vectors to simplify the process of porting them to other languages.

The tests in this release are low-level and should not be used directly, but they still can be applied for testing the algorithms against publicly known attacks, the engineers said. For example, developers can use Wycheproof to verify whether algorithms are vulnerable to invalid curve attacks or biased nonces in digital signature schemes.

So far the project has been used to run more than 80 test cases and has identified 40-plus vulnerabilities, including one issue where the private key of the DSA and ECDH algorithms could be recovered under specific circumstances. The weakness was present because libraries were not checking the elliptic curve points they received from outside sources.

“Encodings of public keys typically contain the curve for the public key point. If such an encoding is used in the key exchange, then it is important to check that the public and secret key used to compute the shared ECDH secret are using the same curve. Some libraries fail to do this check,” according to the available documentation.

Cryptographic libraries can be quite difficult to implement, and attackers frequently look for weak cryptographic implementations rather than trying to break the actual mathematics underlying the encryption. With Wycheproof, developers and users can check their libraries against a large number of known attacks without having to dig through academic papers to find out what kind of attacks they need to worry about.

The engineers looked through public cryptographic literature and implemented known attacks to build the test suite. However, developers should not consider the suite to be comprehensive or able to detect all weaknesses, because new weaknesses are always being discovered and disclosed.

“Project Wycheproof is by no means complete. Passing the tests does not imply that the library is secure, it just means that it is not vulnerable to the attacks that Project Wycheproof tries to detect,” the engineers wrote.

Wycheproof comes two weeks after Google released a fuzzer to help developers discover programming errors in open source software. Like OSS-Fuzz, all the code for Wycheproof is available on GitHub. OSS-Fuzz is still in beta, but it has already worked through 4 trillion test cases and uncovered 150 bugs in open source projects since it was publicly announced.

 

 
