Pardon the interruption of your Wireshark traces. I’d like to switch gears a bit today and talk about software testing rather than network testing.
I can’t imagine that anyone would disagree that better software with fewer bugs would be welcome. The question is how?
While start-ups and small development teams struggle with not having enough tests to check everything, large software projects actually struggle with having too many. Google's Gmail, for example, has to run 3.5 million tests before every new release. With the Agile process, there are software updates every 1-2 weeks, which leaves no time for a long, careful testing cycle. Tests have to be run while the code is still being developed. It's like building the ship while it's already at sea.
To stretch the bad analogy, for large projects the ship has already been built; it's just being modified. When you drill a hole through the hull, you don't need to check every rivet on the entire boat; you just need to make sure that hole has been plugged, though you still want to double-check everything when you're done.
Similarly, developers need a way to check each new block of code as it's added or changed. It's wasteful to run thousands or millions of tests every time there's a small change to the code, but there hasn't been an easy way of knowing which tests are relevant to any specific change. Given enough time, the test team can figure it out, but with hundreds of changes being made each day in a large, complex code base full of cross-dependencies, there isn't enough time to investigate which tests to run, let alone run them all.
That’s where artificial intelligence and machine learning come in. By analyzing the code history and learning from the test results, software can figure out which tests are actually needed to check a specific code change, and can connect to the test automation system to have them run automatically for each code change. That allows changes to be tested immediately while they’re still easy to fix. At the end of the development cycle, when all of the tests are run once, there should be almost no failures and the code can be released quickly.
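To make the idea concrete, here's a minimal sketch of history-based test selection. This is a hypothetical illustration, not Appsurify's actual algorithm: it simply scores each test by how often it failed in past commits that touched the same files as the current change, then runs the tests whose scores clear a threshold.

```python
from collections import defaultdict

def build_failure_counts(history):
    """history: list of (changed_files, failed_tests) pairs, one per past commit.
    Returns counts[file][test] = number of commits touching `file` where `test` failed."""
    counts = defaultdict(lambda: defaultdict(int))
    for changed_files, failed_tests in history:
        for f in changed_files:
            for t in failed_tests:
                counts[f][t] += 1
    return counts

def select_tests(counts, changed_files, threshold=1):
    """Return the tests whose combined historical failure count for the
    changed files meets the threshold, i.e. the tests most likely to
    catch a regression in this particular change."""
    scores = defaultdict(int)
    for f in changed_files:
        for t, n in counts.get(f, {}).items():
            scores[t] += n
    return {t for t, s in scores.items() if s >= threshold}

# Toy commit history (file names and test names are made up):
history = [
    (["auth.py"], ["test_login", "test_logout"]),
    (["auth.py", "db.py"], ["test_login"]),
    (["ui.py"], ["test_render"]),
]
counts = build_failure_counts(history)
print(sorted(select_tests(counts, ["auth.py"])))  # → ['test_login', 'test_logout']
```

A real system would add many more signals (which functions changed, test flakiness, recency weighting) and learn the scoring instead of hard-coding it, but the core idea is the same: past failures tell you which tests matter for a given change.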
If that sounds like a good idea to you, I'm glad, because that's what my team has been building for the past couple of years. And we're getting ready to announce our new product: Appsurify.
There's no magic in Appsurify. It can't read your code and magically find bugs, and it won't detect security holes for you. The development team still needs to create the tests and build the test infrastructure. But if you're part of a software development or test team, our machine learning can ensure that you run the right tests at the right time.
We've been refining the machine learning on open source projects for the past six months, but now we need your help. We're looking for beta users willing to try it out on their own software development infrastructure, to make sure it works on your projects the way it works on open source projects.
Important: I apologize for the shameless product plug in this posting, but I'll try to make up for it by offering a free one-year subscription to our software to any www.NetworkDataPedia.com reader who reaches out to me now and joins our beta program.
You can find plenty more detail at appsurify.com, or contact me at firstname.lastname@example.org.
Author - D.C. Palter. DC is an old friend and a super technologist!
DC Palter is CEO and co-founder of Appsurify, applying machine learning to make software testing smarter. He was previously President and founder of Apposite Technologies, the leading developer of WAN emulation tools, and prior to that, led the market entry at Mentat (now Symantec) for the first WAN optimization products. He is also a mentor for the start-up community in Los Angeles.