When you search the internet for analyses of the Storm Worm, you'll find plenty. I don't want to reinvent the wheel, but I simply enjoy doing (dynamic) analysis.
A few days ago I was collecting some variants of the Storm worm (aka Peed, Nuwar, Zhelatin, Peacom) and ran them through my current AV (F-Secure) for identification. One of the samples was detected as Email-Worm.Win32.Zhelatin.pt, and I was curious to find a description on their website, but there wasn't one! Google and other search engines couldn't help me, and a search for the checksum didn't turn up any more information. So I decided to do an analysis myself, and it was very interesting to see this worm's activity.
After I executed the sample, internet traffic increased heavily. My firewall (Sygate) raised a new warning every second (!).
Here is a summary of the analysis; if you're interested in the whole analysis, contact me:
ChatoFlores [AT] my.security.nl.
I've got a new idea for identifying and neutralising malware that injects code into other processes.
It is a fairly simple thing; I have plenty of code that could be used for it and I know how to write it. The problem is that I have no idea how to test it, mostly because I don't have access to a malware sample that performs process injection.
The program is basically an extension of one of my older programs (Procwall), but it would audit a process for certain properties and track those properties while it is running.
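The auditing idea can be sketched abstractly. Procwall's internals aren't shown here, so the property names and the shape of the snapshot dictionaries below are my assumptions; on Windows the real values would come from APIs such as EnumProcessModules, Toolhelp snapshots, and VirtualQueryEx. A minimal sketch of the baseline-and-track approach:

```python
from dataclasses import dataclass, field

@dataclass
class ProcessAudit:
    """Records a per-process baseline of properties and flags deviations
    typical of code injection: newly loaded modules, extra threads, and
    fresh writable+executable memory regions."""
    baseline: dict = field(default_factory=dict)

    def record_baseline(self, pid: int, props: dict) -> None:
        # props is a snapshot, e.g.:
        # {"modules": {"app.exe", "kernel32.dll"}, "threads": 2, "rwx_regions": 0}
        self.baseline[pid] = dict(props)

    def check(self, pid: int, current: dict) -> list:
        """Compare a fresh snapshot against the baseline; return alert strings."""
        base = self.baseline.get(pid)
        if base is None:
            return ["no baseline recorded"]
        alerts = []
        new_mods = set(current["modules"]) - set(base["modules"])
        if new_mods:
            alerts.append("unexpected module(s): %s" % sorted(new_mods))
        if current["threads"] > base["threads"]:
            alerts.append("thread count rose (possible CreateRemoteThread)")
        if current["rwx_regions"] > base["rwx_regions"]:
            alerts.append("new RWX region (possible injected shellcode)")
        return alerts
```

The design choice is simply that injection rarely leaves the target process's observable properties unchanged, so diffing snapshots against a known-good baseline is a cheap first filter.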
The philosophy & the truth of why no security product is perfect.
I used to criticize products a lot when they failed. I used harsh words & complained a lot, to the point of bashing them, until I realized my mistake.
I looked at myself & saw that I am also not perfect. I realized that I may have judged them too harshly & I regret it, so I want to share this with all of you.
I have now learned to respect all security vendors & believe that no product is perfect. All of these are actually good:
1) Firewall with HIPS
4) Anti-malware tools
About a year ago, I read something about VM-detecting malware.
After studying this subject, a few thoughts came to mind.
The first thought was: "How do you analyze the behaviour/payload of VM-detecting malware without the use of a virtual machine, without sacrificing your computer, and without the need to re-install your OS after the analysis (= infection)?"
According to a SANS article, this can be addressed either by patching the malware so it doesn't look for signs of a VM environment, or by making changes to the VM environment that trick the malware.
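The second approach is usually illustrated with VMware `.vmx` options. The settings below are the ones commonly cited in that literature (e.g. Liston and Skoudis' "Thwarting Virtual Machine Detection"); they are undocumented, can break VMware Tools and guest performance, and should be treated as assumptions rather than a supported interface:

```
isolation.tools.getPtrLocation.disable = "TRUE"
isolation.tools.setPtrLocation.disable = "TRUE"
isolation.tools.setVersion.disable = "TRUE"
isolation.tools.getVersion.disable = "TRUE"
monitor_control.disable_directexec = "TRUE"
monitor_control.disable_chksimd = "TRUE"
monitor_control.disable_ntreloc = "TRUE"
monitor_control.disable_selfmod = "TRUE"
monitor_control.disable_reloc = "TRUE"
monitor_control.disable_btinout = "TRUE"
monitor_control.disable_btmemspace = "TRUE"
monitor_control.disable_btpriv = "TRUE"
monitor_control.disable_btseg = "TRUE"
```

The idea is to suppress the communication channel and execution quirks that common VM-detection tricks key on; it does not hide every possible fingerprint.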
Recently there was a discussion on this site about Anti-Virus software testing. I even challenged AV companies to provide input on AV testing (no responses yet :) After some comments about the difficulty of testing and the need for rigorous scientific methods, Frank Boldwin posed the interesting question "What are the next steps?".
One could conjecture that the next steps are to come up with a set of criteria and requirements for useful, accurate testing; then design a framework to do this testing and see if the idea really has merit; and finally perform the testing, analyze the results, and produce some sort of report.
However, I'm not entirely convinced that AV testing has any benefit. Those of us who work in this realm understand that AV detection rates are very poor and probably can't be improved much using current techniques. Because AV is so easy to evade, it's unlikely any AV is going to stand out as exceptionally good over the others in any statistically meaningful way. (I've done some testing that bears this out.)
I can easily take a piece of malware that is detectable by all major AV tools and make it undetectable by them (for a while) with a few minutes of work.
So what do we get out of testing AV? Help in purchasing decisions? Do we really need it? Maybe most people intuitively sense that ESET's NOD32 is better than Symantec, or that F-Secure is better than McAfee. But really, what does this matter? Companies will choose an AV based on a complex set of criteria including price, features, enterprise deployment abilities, etc. Individuals will buy based on what is most well known or what they get in a bundle.
Will seeing their name low down on the list make an AV company try to improve its product? I would venture to say most AV companies are already trying to improve their products as best they can. So what change do we effect by publishing these results?
I wonder if finding new ways to communicate threats, catalog and analyze samples, and even detect the maliciousness of a file more accurately are better places to spend our efforts. Or maybe there is a set of tests we can perform which can tell us more about the nature of malware, why it's so hard to detect, and the flaws in AV that make it so unsuccessful.
It's very tempting to argue about which AV is better, just like arguing about which OS or programming language is best. However, I'm not convinced it's really productive.
So I propose we discuss what it is we want to know about malware and AV and then come up with tests that can provide us the answers. What do you think?
Another good analysis... have a look at:
"Since early 2007 a new form of malware has made its presence known on the Internet by its prolific growth rate, its ability to distribute large volumes of spam, and its ability to avoid detection and eradication. Storm Worm (or W32.Peacomm, Nuwar, Tibs, Zhelatin), as it is known, is a highly prolific new generation of malware that has gained a significant foothold in unsuspecting Microsoft Windows computers across the Internet. Storm, like all bots, distinguishes itself from other forms of malware (viruses, Trojan horses, worms) by its ability to establish a control channel that allows its infected clients to operate as a coordinated collective, or botnet. However, even among botnets Storm has further distinguished itself by being among the first to introduce a fully P2P control channel, to utilize fast-flux to hide its binary distribution points, and to aggressively defend itself from those who would seek to reverse engineer its logic."
No, I'm not dead, just too busy over the last few weeks. But today I have a new paper for you: an analysis of the malware Peacomm.C aka StormWorm. It mainly focuses on extracting the native Peacomm.C code from the original encrypted/packed code and everything that happens along the way: XOR + TEA decryption, TIBS unpacking, defeating anti-debugging code, file dropping, driver-code infection, VM-detection tricks, and all the nasty things the rootkit driver does.
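The XOR + TEA stage mentioned above can be sketched generically. The actual key values and block layout used by Peacomm.C are not reproduced here (the key in the example is a placeholder); this is just the standard TEA cipher (32 rounds, delta 0x9E3779B9) that such an unpacker has to reimplement, plus a trivial XOR layer:

```python
DELTA = 0x9E3779B9  # TEA's round constant
MASK = 0xFFFFFFFF   # keep arithmetic in 32 bits

def tea_encrypt(v0, v1, key):
    """Standard TEA encryption of one 64-bit block (two 32-bit words);
    key is a tuple of four 32-bit words."""
    s = 0
    for _ in range(32):
        s = (s + DELTA) & MASK
        v0 = (v0 + ((((v1 << 4) & MASK) + key[0]) ^ ((v1 + s) & MASK) ^ ((v1 >> 5) + key[1]))) & MASK
        v1 = (v1 + ((((v0 << 4) & MASK) + key[2]) ^ ((v0 + s) & MASK) ^ ((v0 >> 5) + key[3]))) & MASK
    return v0, v1

def tea_decrypt(v0, v1, key):
    """Standard TEA decryption: the 32 rounds, run in reverse."""
    s = (DELTA * 32) & MASK
    for _ in range(32):
        v1 = (v1 - ((((v0 << 4) & MASK) + key[2]) ^ ((v0 + s) & MASK) ^ ((v0 >> 5) + key[3]))) & MASK
        v0 = (v0 - ((((v1 << 4) & MASK) + key[0]) ^ ((v1 + s) & MASK) ^ ((v1 >> 5) + key[1]))) & MASK
        s = (s - DELTA) & MASK
    return v0, v1

def xor_layer(data: bytes, key: int) -> bytes:
    """Outer single-byte XOR layer; the key value is a placeholder."""
    return bytes(b ^ key for b in data)
```

Recognizing the 0x9E3779B9 constant in a disassembly is often the quickest way to spot a TEA (or XTEA) stage in a packer.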
A new volume of Uninformed has been released today. Lots of great papers you should definitely check out.
- Real-time Steganography with RTP by I)ruid
- PatchGuard Reloaded: A Brief Analysis of PatchGuard Version 3 by Skywing
- Getting out of Jail: Escaping Internet Explorer Protected Mode by Skywing
- OS X Kernel-mode Exploitation in a Weekend by David Maynor
- A Catalog of Windows Local Kernel-mode Backdoors by skape & Skywing
- Generalizing Data Flow Information by skape
Have you ever wondered about the people behind Offensive Computing? Wish you could have seen one of our talks at Blackhat or Defcon, but weren't able to attend?
Poking around on Google, I found a bunch of links to videos of various Offensive Computing related talks. If anyone finds a link to the Blackhat video of HD Moore and myself, please post it in a comment. That talk had way more content than the Defcon version. Enjoy!
HD Moore's and my Blackhat/Defcon talk materials, including slides, paper, and videos, are up and available on the Metasploit site.
Enjoy, and if you have any questions or ideas, let one of us know.