Monday, April 11, 2011

UK EMEA Lab Notes - March 2011 - Ian Hyndman

The WildList
The WildList was created in 1993, when computer viruses were starting to become a problem. Back then viruses were simple things and relatively easy to contain.

The WildList is a compilation of sample viruses that have been submitted by security professionals from around the world. It is published each month to a select group of subscribers. Any security professional can contribute, but a sample must be submitted by at least two respected sources before it is included in the list.

As you might imagine, not everyone has the capacity to harvest and identify malware, so the majority of samples on the lists naturally come from anti-virus vendors. And it is undoubtedly a good thing that these vendors participate; they see far more new threats than anyone else.

In the industry, the timing of submissions to the WildList causes heated discussion, because many people believe samples may be withheld from the list until the submitting vendor has a solution in place. Submitting a sample only after a fix has been prepared gives that vendor a competitive advantage.

My point, however, relates specifically to malware testing, and the broad impact of this delay on testing practices.

Because the samples are typically about a month old when published, using the WildList as the basis of real-time or real-world scenario testing is flawed. The WildList is effectively a month out of date compared with the real world, and two or more of the participating vendors may already have fixes in place for the viruses listed.

An article by Trend Micro states that new threats are now emerging at the rate of one every 1.5 seconds, and testing methodologies should change to keep up.
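To put that rate in perspective, here is a rough back-of-envelope calculation. The 1.5-second figure is Trend Micro’s; the rest is simple arithmetic, and the 30-day month is just a convenient assumption.

```python
# Back-of-envelope: how many new threats emerge while a monthly list
# such as the WildList is being compiled and published, assuming the
# quoted rate of one new threat every 1.5 seconds.

SECONDS_PER_THREAT = 1.5

per_day = 24 * 60 * 60 / SECONDS_PER_THREAT  # roughly 57,600 new threats a day
per_month = per_day * 30                     # roughly 1.7 million a month

print(f"New threats per day:   {per_day:,.0f}")
print(f"New threats per month: {per_month:,.0f}")
```

On those assumptions, a list that is a month old at publication is already well over a million threats behind the real world.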

I’m not suggesting that the WildList should be done away with. Many highly respected companies use and contribute to it, and it is an effective industry tool. For testing purposes, however, I believe it would serve better as a regression tool than as a front-line tool.

Quality over Quantity
One widely used method of malware testing is to select a repository of 50,000 samples and run it against the product under test. The results may make for good marketing – think: “This product detected 49,995 out of 50k samples.” But the real question to ask is: of those 50,000 samples, how many are target specific?

If I am running a test on a Windows 7 64-bit OS, are there samples in my list designed specifically to exploit flaws in Windows 2000? If so, what benefit did that test hold in my scenario? One hundred samples known to target Windows 7 would lend greater credibility to the results than 40,000 random samples.
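As a minimal sketch of that idea, the snippet below filters a sample corpus down to the platform under test before measuring detection. The sample records and the detect() function are hypothetical placeholders, not part of any real test harness.

```python
# Quality over quantity: only count samples relevant to the platform under test.
# Sample metadata and detect() are illustrative stand-ins.

samples = [
    {"name": "sample_001", "target_os": "Windows 2000"},
    {"name": "sample_002", "target_os": "Windows 7 x64"},
    {"name": "sample_003", "target_os": "Windows 7 x64"},
    # ... thousands more ...
]

def detect(sample):
    """Placeholder: would invoke the product under test against the sample."""
    return True

platform_under_test = "Windows 7 x64"

relevant = [s for s in samples if s["target_os"] == platform_under_test]
detected = sum(1 for s in relevant if detect(s))

rate = detected / len(relevant) if relevant else 0.0
print(f"{detected}/{len(relevant)} relevant samples detected ({rate:.1%})")
```

A headline figure computed over the relevant subset says far more about the product in that scenario than a raw count over the whole repository.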

The Wild
As the Trend Micro article (plus many others) shows, the rate of change for threats is always increasing. If threats really are emerging at one every 1.5 seconds, the industry needs to look at how it can protect the consumer in the smallest possible time. The consumer needs to know that anti-malware vendors are providing protection against the threats of right now, rather than those found a month ago.

Refreshingly, there has been a shift in emphasis from some vendors. They have started to look at threats’ behaviour instead of their signatures. This is a great step forward because a Trojan (for example) will always be a Trojan and display certain characteristics as it tries to execute on the system - even if the vendor doesn’t have that particular sample on file.

(I accept that this does raise questions about some of the latest worms being able to change themselves to hide and avoid detection, but for this discussion I’m generalising about the majority of threats, not selected exceptions).
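To illustrate the idea in its crudest form, the sketch below scores a process by the suspicious actions it attempts rather than by matching a known signature. The behaviours, weights and threshold are illustrative examples only; real behaviour-based engines are far more sophisticated than this.

```python
# A very simplified sketch of behaviour-based detection: score a process by
# the characteristic actions it attempts at run time, not by signature.
# Behaviours, weights and the threshold are made-up examples.

SUSPICIOUS_BEHAVIOURS = {
    "writes_to_autorun_registry_key": 3,
    "injects_into_other_process": 4,
    "disables_security_software": 5,
    "opens_outbound_connection_on_unusual_port": 2,
    "modifies_hosts_file": 2,
}

ALERT_THRESHOLD = 5

def score_process(observed_behaviours):
    """Sum the weights of each suspicious behaviour observed."""
    return sum(SUSPICIOUS_BEHAVIOURS.get(b, 0) for b in observed_behaviours)

observed = ["writes_to_autorun_registry_key", "injects_into_other_process"]
if score_process(observed) >= ALERT_THRESHOLD:
    print("Behaviour looks Trojan-like: quarantine and flag for analysis")
```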

Malware testing
Regardless of the method used, any malware test can only be considered a snapshot in time. A product only passes a specific test at a specific point in time. By the time the test report is generated, hundreds of new threats have found their way into the wild.

The best way of truly gauging how a product copes in the wild is to keep it running. Continuous testing over a sustained period will give a much better indication of the product’s capabilities. No one product is going to come out on top every day. Different products have different strengths and these will depend on the threats that are targeting that particular machine at that particular time.
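In outline, such continuous testing might look like the loop below. The fetch_fresh_samples() and run_product() functions are hypothetical stand-ins for a real harvesting feed and test harness; the point is the daily cadence and the running record, not the detail.

```python
# Rough outline of continuous testing: exercise the product against freshly
# harvested samples every day and keep a running record, rather than relying
# on a single snapshot. The two helper functions are hypothetical stand-ins.

import datetime
import time

def fetch_fresh_samples():
    return []               # would pull today's newly observed threats

def run_product(samples):
    return 0, len(samples)  # would return (detected, total) for the product

results = {}

while True:
    today = datetime.date.today().isoformat()
    samples = fetch_fresh_samples()
    detected, total = run_product(samples)
    results[today] = (detected, total)
    print(f"{today}: {detected}/{total} detected")
    time.sleep(24 * 60 * 60)  # wait a day and repeat
```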

This is just one option from a host of possible methodologies. No single test can be definitive for all scenarios, but I do feel that, with the new breed of threats on the horizon, we need to move away from using the WildList as the only testing benchmark.

What should the new benchmark be? Answers on a postcard please…

Australian Lab Notes - March 2011 - Steve Turvey

2012: the year of the CME!

I caught one of my kids watching the disaster movie 2012 the other day. The professional in me considers this film to be scientifically the shonkiest movie since “The Core”, but it’s an awfully entertaining flick thanks to its over-the-top special effects. The disastrously rendered scenario for 2012 was that the Sun suddenly started spewing out “mutated” neutrinos (stop sniggering) that subsequently heated the Earth’s mantle, triggering bedlam.

At about the same time I noticed a little news item: the year 2012 is actually forecast to be a particularly nasty year for solar flares and Coronal Mass Ejections (CMEs). CMEs have been in the news quite a bit lately, but attract little interest from most of us. No doubt they are closely followed by the inevitable doomsdayers warning, “the end is nigh”. So what’s the truth? Is there any risk?

Actually, yes! While suggesting CMEs will return us to the Iron Age may be a bit of an overstatement, it turns out that big CMEs are indeed the bane of modern technology.

A CME is, in effect, a large storm in space. This storm comprises radiation and fast-moving, charged particles that can disrupt the earth’s magnetosphere, which is effectively our protective force field against nasties such as cosmic radiation (of which a colourful side effect is the Aurora or Northern/Southern Lights caused by radiation crashing into the upper atmosphere).

Some of you might recall that in 1989 a large chunk of Canada was plunged into darkness when a strong CME, by no means the largest the Earth has experienced, struck. It instantly overloaded Canada’s power grid, burning out transformers all over the place. What is perhaps less well known is that back in 1859 a much larger CME seriously interfered with the newly invented telegraph, shorting it out and starting many fires. That CME was so powerful that the Northern Lights (usually only seen in Canada and the northern USA) were visible as far south as Cuba.

Had the 1859 event occurred today, it would have found more than a basic telegraph network to wreak havoc upon.

To put this in perspective, most electronic equipment is (hopefully) designed to withstand a typical or average CME, based on the last 100 years or so. However, the 1859 event was much larger than anything we’ve experienced in that time, and geological records suggest it was not a one-off: equivalent or larger events have occurred at quite regular intervals in the past.

So if we are hit by a large CME, what should we expect other than a pretty light show in the sky over Brisbane?

Assuming the CME doesn’t wipe out the GPS satellites entirely, the old joke about your GPS guiding your car into a lake could well come true. For quite a few days the signals from the satellites would be incorrect and your GPS positioning very inaccurate. Of course, if the event is powerful enough, your car’s own GPS unit could be fried too, along with the engine computer and any other electronic gizmos on board, so you probably wouldn’t be driving anywhere. Add to that list cell phones - forget them, landlines - ditto, TV - probably fried as well.

A CME, if it’s large enough, can punch right through our magnetosphere and fry electronics on the ground, not just the satellites outside our atmosphere. A large CME can also seriously deplete the ozone layer. In extreme cases the solar wind, no longer impeded by our magnetic field (a tangled mess until it reforms), can strip away part of the atmosphere.

There are probably a few of you thinking this could be a good thing - back to the good old days, reading a book. But I think you would be in for a shock. With no electricity you no longer have your washing machine, so boiling water over a wood fire and scrubbing your smalls by hand is something you’d quickly grow to loathe. No microwave, no electric kettle, no fridge, no electric cooker - the list of life’s true essentials goes on. It could also take quite some time to get the power grid up again, because no one keeps large stocks of transformers sitting around for this scenario.

Interestingly, our sun is one of the most well-behaved stars observed by astronomers. Other stars in the same class as our sun have been observed to produce CME events millions of times more powerful than those of our gentle star. CMEs of that magnitude would mean extinction events: forget cell phones, it’s us that would be fried.

Friday, April 8, 2011

Introducing Enex TestLab Security Testing Division Advertisement


Enex TestLab is moving forward with its marketing strategy, producing this advertisement to highlight our independent security testing services.