Rename

A nifty little Linux command that renames all of the file extensions in a directory.

Change all .htm files to .html:

rename .htm .html *.htm
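That’s the util-linux version of rename, which replaces the first occurrence of the string. On Debian-based systems the default rename is the Perl version, which takes a regular expression instead; the equivalent there would be:

rename 's/\.htm$/.html/' *.htm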

DSH – Dancer’s Shell, a.k.a. Distributed Shell

This tool is also great to have in the toolbox for remotely administering servers, since it can run the same command on many machines at once.
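For example, assuming your hosts are listed in ~/.dsh/machines.list, a one-liner can check load across the whole fleet:

dsh -a -M -c -- uptime

The -a flag targets every machine in the list, -M prefixes each line of output with the machine name, and -c runs the command on all hosts concurrently.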

MySQL – avoiding the “server has gone away” error

The easy way to avoid this problem is to ensure that max_allowed_packet is set bigger in the mysqld server than in the client, and that all clients use the same value for max_allowed_packet.
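For example, something like this in my.cnf raises the limit on the server and keeps the command-line client in step (16M is just an illustrative size, not a recommendation):

[mysqld]
max_allowed_packet=16M

[mysql]
max_allowed_packet=16M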

Funkload / Open Flash Charts / Django

In my recent hunt for anything Python, I came across three tools that have been pretty helpful in some projects I’ve been working on. I’ve been spending a lot of time lately reading up on and learning about Python, and I’ve run into a few tools that are handy to have in my tool belt for managing systems and building small projects.

Funkload – http://funkload.nuxeo.org/

I came across a utility called Funkload. I’ve checked out some other automated test suites, but I seem to be drawn to Python-based utilities. From what I’ve seen, I really like the interface and the tools that come built in by default. Another huge plus is the free reporting and test documentation it generates from the XML results and configuration files. This is truly a great tool. I’m still getting accustomed to it, and I’m sure I’ll run into some limitations, but for now it meets my needs.
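To give a flavor of it, a minimal Funkload test is just a unittest-style Python class; this sketch assumes a matching Simple.conf file that defines the target URL:

from funkload.FunkLoadTestCase import FunkLoadTestCase

class Simple(FunkLoadTestCase):
    """A minimal smoke test; Funkload records timings for the reports."""

    def setUp(self):
        # The server URL is read from the [main] section of Simple.conf
        self.server_url = self.conf_get('main', 'url')

    def test_homepage(self):
        # Fetch the home page; the description shows up in the generated docs
        self.get(self.server_url, description='Get home page')

You run it once with fl-run-test, hammer the server with fl-run-bench, and turn the resulting XML into HTML reports with fl-build-report.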

Django – http://www.djangoproject.com

I’ve also been looking for a tool for building dynamic websites, and the one that really caught my eye was Django. There’s plenty of documentation on the project and lots of tools for Python. One of its more exceptional features is the built-in admin interface, which really does make building websites so much simpler. I’ve used Django to build a volunteer database for a project I’m currently working on: http://sficg2008.com. The volunteer database will help organize up to 700 volunteers, managing their scheduled times and their current status on background checks and paperwork.

Anyways, the Django framework is easy to use and was a good way for me to get started with Python. It takes care of a lot of programming that I would otherwise need to do, and it does so very elegantly. I haven’t run into any limitations as of yet, but I am fairly new to Python and to developing in general. I hope one day I can rewrite another project I’ve been helping with in Python/Django. I think it’s an excellent tool and I’ve only scratched the surface. It’s even more invigorating that a company like meebo.com is using Django now as well. 🙂 I’m sure support for this framework will grow. I’ve seen some other frameworks for different languages, like CakePHP and Symfony, but I like the straightforward syntax of Python. Not to mention it’s a human-readable language, something I’ve been wanting to learn for quite some time. Python also still keeps the ‘P’ in the LAMP stack. Heh.
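To show why the admin feels so effortless, here’s roughly what a stripped-down volunteer model plus its admin hookup looks like. The field names are made up for illustration, and the admin.site.register style is the newer one (0.96-era Django used an inner Admin class instead):

# models.py
from django.db import models

class Volunteer(models.Model):
    name = models.CharField(max_length=100)
    phone = models.CharField(max_length=20, blank=True)
    background_check_cleared = models.BooleanField(default=False)
    paperwork_complete = models.BooleanField(default=False)

    def __unicode__(self):
        return self.name

# admin.py
from django.contrib import admin
from volunteers.models import Volunteer

# One line and the model gets full list, search and edit screens
admin.site.register(Volunteer)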

Open Flash Charts – http://teethgrinder.co.uk/open-flash-chart/

Open Flash Charts is an open-source Flash charting tool. Similar to Fusion Charts and XML/SWF Charts, it’s a simple way to add charting to just about anything. Although the Open Flash Charts server-side library is written in PHP, I was able to use the Python library and get everything working, though it did take a little bit of help. The charts look good and the integration is fairly straightforward. I hope to use this for future projects due to its ease of use and cool functionality.
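Under the hood, an OFC 1.x chart is driven by a plain ampersand-delimited data string that the Flash movie fetches from a URL, so it’s easy to generate from any language. A rough Python sketch; the key names reflect my reading of the OFC 1.x data format, so treat them as assumptions:

def ofc_data(title, labels, values):
    # Emit an OFC 1.x-style ampersand-delimited data string
    lines = [
        '&title=%s,{font-size: 16px;}&' % title,
        '&x_labels=%s&' % ','.join(labels),
        '&y_max=%d&' % max(values),
        '&values=%s&' % ','.join(str(v) for v in values),
    ]
    return '\r\n'.join(lines)

print(ofc_data('Visitors', ['Mon', 'Tue', 'Wed'], [12, 18, 9]))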

Numenta – http://www.numenta.com

I’ve been looking at this tool for a while but realize that it’s currently out of my league. It’s an artificial intelligence engine that can learn and adapt based on what you feed it. Once I get my development skills up to speed, this is definitely an area I want to explore. I can see the usefulness of this tool in so many ways.

Five mistakes of log analysis

October 21, 2004 (Computerworld) As the IT market grows, organizations are deploying more security solutions to guard against the ever-widening threat landscape. All those devices are known to generate copious amounts of audit records and alerts, and many organizations are setting up repeatable log collection and analysis processes.
However, when planning and implementing log collection and analysis infrastructure, organizations often discover that they aren’t realizing the full promise of such a system. This happens due to some common log-analysis mistakes.
This article covers the typical mistakes organizations make when analyzing audit logs and other security-related records produced by security infrastructure components.
No. 1: Not looking at the logs
Let’s start with an obvious but critical one. While collecting and storing logs is important, it’s only a means to an end: knowing what’s going on in your environment and responding to it. Thus, once technology is in place and logs are collected, there needs to be a process of ongoing monitoring and review that hooks into actions and possible escalation.
It’s worthwhile to note that some organizations take a half-step in the right direction: They review logs only after a major incident. This gives them the reactive benefit of log analysis but fails to realize the proactive one — knowing when bad stuff is about to happen.
Looking at logs proactively helps organizations better realize the value of their security infrastructures. For example, many complain that their network intrusion-detection systems (NIDS) don’t give them their money’s worth. A big reason is that such systems often produce false alarms, which leads to decreased reliability of their output and an inability to act on it. Comprehensive correlation of NIDS logs with other records, such as firewall logs and server audit trails, as well as vulnerability and network service information about the target, allows companies to “make NIDS perform” and gain new detection capabilities.

Some organizations also have to look at log files and audit trails due to regulatory pressure.
No. 2: Storing logs for too short a time
Keeping logs for too short a time makes the security team think they have all the logs needed for monitoring and investigation (while saving money on storage hardware), and then leads to the horrible realization after an incident that all the relevant logs are gone due to the retention policy. The incident is often discovered a long time after the crime or abuse has been committed.
If cost is critical, the solution is to split the retention into two parts: short-term online storage and long-term off-line storage. For example, archiving old logs on tape allows for cost-effective off-line storage, while still enabling future analysis.
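On a single Linux box, even plain logrotate can handle the short-term online half of that split. A minimal sketch, with the path and retention count as placeholders:

/var/log/myapp/*.log {
    # keep about two months of compressed logs online
    weekly
    rotate 8
    compress
    missingok
    notifempty
}

A separate archive job would then ship rotations to tape before logrotate ages them out.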
No. 3: Not normalizing logs
What do we mean by “normalization”? It means converting the logs into a universal format that contains all the details of the original message but also allows us to compare and correlate different log data sources, such as Unix and Windows logs. Across different applications and security solutions, log-format confusion reigns: some prefer Simple Network Management Protocol, others favor classic Unix syslog, and proprietary formats are also common.
Lack of a standard logging format leads to companies needing different expertise to analyze the logs. Not all skilled Unix administrators who understand syslog format will be able to make sense out of an obscure Windows event log record, and vice versa.
The situation is even worse with security systems, because people commonly have experience with a limited number of systems and thus will be lost in the log pile spewed out by a different device. As a result, a common format that can encompass all the possible messages from security-related devices is essential for analysis, correlation and, ultimately, for decision-making.
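Normalization doesn’t have to be exotic; at its core it’s just mapping each source format onto one common schema. A toy Python sketch, where both the regex and the schema field names are illustrative assumptions:

import re

# Classic BSD-syslog shape: 'Oct 21 10:15:02 host program[pid]: message'
SYSLOG_RE = re.compile(
    r'(?P<ts>\w{3}\s+\d+ \d\d:\d\d:\d\d) '
    r'(?P<host>\S+) (?P<prog>[\w./-]+)(?:\[\d+\])?: (?P<msg>.*)')

def normalize(line):
    # Map one syslog line onto a minimal universal record
    m = SYSLOG_RE.match(line)
    if not m:
        return None
    return {'timestamp': m.group('ts'),
            'host': m.group('host'),
            'source': m.group('prog'),
            'message': m.group('msg')}

print(normalize('Oct 21 10:15:02 web1 sshd[2210]: Failed password for root'))

A Windows event-log parser would emit the same four fields, and from there correlation no longer cares where a record came from.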
No. 4: Failing to prioritize log records
Assuming that logs are collected, stored for a sufficiently long time and normalized, what else lurks in the muddy sea of log analysis? The logs are there, but where do we start? Should we go for a high-level summary, look at the most recent events, or something else? The fourth error is failing to prioritize log records. Analysts can get overwhelmed and give up after trying to chew through a king-size chunk of log data without any real sense of priority.
Thus, effective prioritization starts with defining a strategy. Answering questions such as “What do we care about most?” “Has this attack succeeded?” and “Has this ever happened before?” helps to formulate one that will ease the burden of the gigabytes of log data collected every day.
No. 5: Looking for only the bad stuff
Even the most advanced and security-conscious organizations can sometimes get tripped up by this pitfall. It’s sneaky and insidious and can severely reduce the value of a log-analysis project. It occurs when an organization is only looking at what it knows is bad.
Indeed, the vast majority of open-source tools and some commercial ones are set up to filter and look for bad log lines, attack signatures and critical events, among other things. For example, Swatch is a classic free log-analysis tool that’s powerful, but only at one thing: looking for defined bad things in log files.
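Its configuration makes the point: essentially a list of known-bad patterns with actions attached (the pattern and address below are placeholders):

watchfor /Failed password/
    echo
    mail addresses=admin@example.com,subject=SSH_failures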
However, to fully realize the value of log data, it needs to be taken to the next level: log mining. At this stage, you can discover things of interest in log files without any preconceived notion of what you need to find. Examples include compromised or infected systems, novel attacks, insider abuse and intellectual property theft.
This raises an obvious question: how can we be sure we know of all the possible malicious behavior in advance? One option is to list all the known good things and then look for the rest. It sounds like a solution, but such a task is not only onerous but also thankless: it’s usually even harder to enumerate all the good things than all the bad things that might happen on a system or network. So many different events occur that weeding out attack traces just by listing all the possibilities is ineffective.
A more intelligent approach is needed. Some of the data mining (also called “knowledge discovery in databases”) and visualization methods actually work on log data with great success. They allow organizations to look for real anomalies in log data, beyond “known bad” and “not known good.”
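As a crude illustration of the idea (a stand-in for real anomaly detection, not any particular product’s method), even a simple frequency count over normalized messages surfaces events that fall outside both lists:

from collections import Counter

def rare_messages(messages, threshold=2):
    # Messages seen fewer than `threshold` times are flagged as
    # anomalies worth a human look, whether or not they match a
    # known-bad signature.
    counts = Counter(messages)
    return sorted(m for m, c in counts.items() if c < threshold)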
Avoiding these mistakes will take your log-analysis program to the next level and enhance the value of your company’s security and logging infrastructures.