Cyber Adversary Characterization: Auditing the Hacker Mind: Ch 1 – Mitnick Exposed


Leading security, cybercrime and terrorist-intelligence experts with experience from the FBI, US Army, Department of Homeland Security and Department of Defense show you how to develop a well-measured defense against cybercriminals. In Chapter 1 – Introduction, the infamous hacker Kevin D. Mitnick allows himself and one of his more significant hacks to be weighed and measured.

This chapter is excerpted from the book titled "Cyber Adversary Characterization: Auditing the Hacker Mind" By Tom Parker, Eric Shaw, Ed Stroz, Matthew G. Devost, Marcus H. Sachs, published by Syngress. ISBN: 1931836116; Published: 2005; Pages: 324; Edition: 1st.

Chapter 1 – Introduction

Topics in this Chapter:

· Introducing Adversary Characterization

· Cyber Terrorist: A Media Buzzword?

· Failures of Existing Models

· An Introduction to Characterization Theory

Cyber Adversary Characterization

When you picked this book from the shelf, chances are you did so for one of two reasons: mere curiosity about the subject matter, or a feeling that it would give you a better understanding of whom you are protecting your assets against and how you can do a better job at that task. Systems administrators and other IT professionals often find themselves looking for a better understanding of who it is they are protecting their assets against; not having that understanding often creates a feeling of insecurity or vulnerability—the “not knowing” factor. The “not knowing” feeling can enter the equation at multiple levels, and not always in ways directly related to the administration of computer networks. Perhaps you’re a member of your firm’s human resources department, unsure whether the young systems administrator you just hired may one day turn on the firm, causing damage to company assets on a massive scale. And whose fault would it be if that were to happen? So perhaps you should not take the risk and simply find another candidate. Do his young age and lack of experience on a large corporate network make it more likely that he constitutes an insider threat to your organization? That perhaps one day he will turn against the company, giving systems access to his so-called friends on an Internet Relay Chat channel he frequents, because he is upset over a salary dispute? Or is he likely to leak sensitive company intellectual property to a competitor when offered a bribe? Perhaps you are that systems administrator, concerned that the systems it is now your task to protect are at risk, but unsure from whom or what. What does your adversary look like? What kinds of attacks will he or she use in trying to compromise the network? Indeed, what is it that motivates your adversary? Perhaps you are also concerned that a mission-critical application has not been designed in a secure manner; what factors should the development team consider when designing attack countermeasures?

These examples make up a minute percentage of the questions employees of organizations large and small are asking themselves on a daily basis—but with what authority are they answering them? What courses have they studied that enable them to accurately identify a threat to their organization and mitigate it in an effective manner? The truth is that in the public sector, there is little data available to average employees to enable them to answer these questions. Government organizations and law enforcement are a little better off, given the threat-modeling systems many of them use on a daily basis.

There is a clear need for a better understanding of the cyber adversary of today and tomorrow, from what it is that motivates an adversary to the threat that adversary poses to your organization’s assets. Of course, with hindsight it is easy to make sweeping statements, such as the claim that greater awareness of computer security issues within your organization would have mitigated the repercussions of many of recent history’s computer security incidents, or perhaps prevented them in the first place.

But as you’ll know if you’re a systems administrator, persuading management that a threat exists, identifying the nature of that threat, and expressing it in a way that even a CEO will understand, especially when it involves budgetary considerations, is not so easy. Even when an incident has occurred, how do we learn from it? Sure, you can run around patching systems that will probably be vulnerable again in a few months anyway, but what can you learn from the adversary who has, in spite of what any of us admit in public, outmaneuvered you?

It is clear that we need a better understanding of an adversary’s core properties and a set of proven threat characterization metrics to measure those properties and determine how any given adversary would behave in a defined situation—or, more important, against a specific asset. Throughout this book, you will find various characterization metrics and theories, with each chapter designed to focus on a differing application of characterization theory. We characterize everything from the threat posed by adversaries inside your organization to the threat your company may face from so-called high-end cyber adversaries, such as members of terrorist organizations and well-funded rogue states. The following pages document several case studies, either based on real events with partially fictitious details or drawn from accounts of actual incidents. Although these case studies do not by themselves scope out the full extent of the characterization problem, they set the scene nicely for what’s to come. The first case study is the infamous Kevin Mitnick’s first-person account of an attack against a small technology company based in the San Fernando Valley. The story was taken from an interview Kevin gave the author and details his 1987/1988 attempt to gain unlawful access to Digital Equipment Corporation materials. It exemplifies one of the many motivations of cyber adversaries: the retrieval of additional capabilities—in this case, source code. In the concluding chapter, we will use the characterization theory covered in the intervening chapters to examine Kevin’s attack and the ways it could have been prevented through a better understanding of the cyber adversary.

Case Study 1: A First-Person Account from Kevin D. Mitnick

“Over a decade ago, I had compromised a number of systems owned by Digital Equipment Corp. [DEC], located on the corporation’s wide area network named Easynet,” Kevin Mitnick recalls. “My ultimate goal was to gain access to the systems within DEC’s engineering department in order to retrieve the source code for VMS—DEC’s flagship operating system product. The aim of getting the source code for VMS and other operating systems was so that I could analyze the extremely well-commented [documented] code, written by DEC developers, to determine where security-related modifications had been made. DEC engineers would often document the details of a fixed vulnerability next to the previously vulnerable code segment. A generally unknown fact is that my ultimate goal as a hacker was to become the best at circumventing security systems and to overcome any technical obstacle that got in my way; whatever the objective, I possessed enough persistence to always succeed.”

“I Put My Freedom on the Line for Sheer Entertainment …”

“Although I had already acquired access to the DEC Easynet network, none of the systems to which I had access resided on the VMS development cluster. One information-gathering method was to install network sniffers on the systems I had previously compromised, in hopes of intercepting interesting information such as user authentication credentials. My goal was to eventually gain access to the VMS development cluster—complete with development tools and the latest release of the operating system source code. Unfortunately, back in those days, many operating system vendors had yet to standardize on TCP/IP as the network transport protocol of choice. Most, if not all, of the systems on Easynet primarily used the DECNET/E protocol. I installed sniffers on certain compromised nodes (systems), which allowed me to gain access to additional computing resources. The targeted resources were other nodes on the network with a sufficient amount of unused disk storage, and any system that had direct connectivity to the Internet. The source code files were so large, even when compressed, that it would have taken months to download them over dial-up. I needed a way to transfer the code outside DEC so I could analyze it without fear of being detected.

And so, I began to research the possibility of writing or acquiring a sniffer that worked with the DECNET/E protocol. After a few hours of research, a few vendor names came up. These vendors sold expensive products that would have been useful in my endeavor to intercept traffic. Sometime later, I stumbled across a network diagnostics program designed to analyze and monitor DECNET/E protocols, written by a company in the San Fernando Valley named Polar Systems. One feature of the network diagnostics suite was the ability to collect and display packets captured from a DECNET interface. The tool was just what I needed—I just had to figure out how I was going to borrow it. My initial attempts to retrieve the software from Polar Systems consisted of using my knowledge of the telephone system to identify which phone numbers terminated at the likely address where the product was developed, sold, or supported. After enumerating every telephone number terminating at the Polar Systems address, I proceeded to identify which of the lines were data, fax, and voice. It turned out that Polar Systems was actually run out of someone’s residence, which made my reconnaissance much easier. I identified two numbers that answered with modem breath. I dialed into both, discovering the all-too-familiar beep indicating the box was waiting for me to enter the system password. A security feature allowed the operator to require a password before the system would prompt for a username and password. The telltale sign was a distinctive beep after hitting the return key on my VT100 terminal. I guessed that Polar Systems used these numbers to dial into their system remotely—perhaps if I could get access through their dial-in mechanism, I could access their development system, complete with sniffer software, and if I got lucky, source code! I promptly disconnected from my dial-in session, as I did not want to raise suspicions if they happened to be watching the lights blink on the dial-up modem.
After all, the business was run out of someone’s home. After much thought, I decided that the easiest way in was going to be through a blended attack using both social engineering and technical expertise. I remembered that DEC was under intense pressure to release security patches for some newly discovered vulnerabilities that had recently been publicized. Accordingly, DEC set up a special toll-free number so anyone could call in and request the latest security patch kit on magnetic or cartridge tape. As luck would have it, the telephone operator at the toll-free number did not bother verifying whether the caller was a legitimate customer. This meant that pretty much anyone with a telephone line and the guile to call DEC could get themselves a critical security patch kit on tape—absolutely free.

I placed several telephone requests for patch kits to be delivered to several addresses in the Los Angeles area. After receiving the patch kits, I proceeded to carefully remove the tape and written materials, wearing a pair of latex gloves to ensure that my fingerprints would not be left on the tapes. I knew they would eventually be in the possession of my target, and possibly thereafter, law enforcement. After extracting the files from the special VMS formatted back-up (saveset), I decided the best way to meet my objective was to backdoor the patch kit with extra code that would covertly modify the VMS login program, which was responsible for authenticating users at the operating system level and which stood between me and Polar Systems’ IPR.

After a number of hours of analysis, I identified a segment of the binary that could be used to inject my own instructions—in this case, several jump instructions to unused areas within the image of the login program, which would include several “special” features giving me full control of the system once installed. To aid my work, I acquired a similar patch written by the Chaos Computer Club (CCC) that did essentially the same thing on an earlier version of VMS. After a few days of researching, programming, and testing, I decided that the patch was ready to be incorporated into the security patch kit. I rolled up my patch with all the other legitimate files into a new VMS formatted backup, wrote it to tape, and carefully repackaged the box just as it had arrived from DEC. I even went to the trouble of shrink-wrapping the cartridge tape with the packing slip to give it that extra dose of authenticity.

Figure 1.1 An Assembler Dump of the Target VAX Binary

I carefully repackaged the newly shrink-wrapped tape into the DEC-labeled box—the one I had originally received it in—taking care to ensure that no fingerprints, skin cells, or hairs were deposited on the tape or into the box. My next step was figuring out the best way to get my target to install the update from my “special” tape. I thought about mailing it from Los Angeles, but that might have raised a red flag—the real tape was mailed from Massachusetts. I had to think of a better way.

Once the target installed the “security” update on their systems, I would be able to sneak in over their dial-in and retrieve the programs I needed to assist my further penetration of DEC’s Easynet.

All was going according to plan—I opted to become a UPS delivery man for a day and hand-deliver the package to the residence where Polar Systems ran its operations. After purchasing a UPS delivery outfit from a costume shop (Hollywood is a great place to buy costumes), I made an early morning visit to the address for Polar Systems. I was greeted at the door by some guy who looked like he needed a couple more hours of sleep. I hurriedly asked the gentleman to sign for the package as I complained about being late for another delivery. The gentleman cooperatively signed for the package and took it into the house, closing the door behind him.

You may be wondering why I distracted him by acting in a hurry. Well, although I did not want to raise suspicion by coming across in an unnatural manner, I was lacking one vital object possessed by all UPS delivery folks—a UPS truck. Luckily, the inert gentleman did not notice anything out of the ordinary. The following day, I dialed into Polar Systems’ modems, entering the secret phrase required to activate my backdoor. To my disappointment, the attempt failed—I figured they must not have installed the security patch yet. After some 10 days, Polar Systems finally installed the critical update, allowing me to bypass the authentication on the dial-up line and yielding access to both the source tree and binary distribution of the Polar Systems DECNET monitoring tool.”

Case Study 2: Insider Lessons Learned

In May 1999, Kazkommerts Securities, a small company based in Almaty, Kazakhstan, entered into a contract with Bloomberg L.P. for the provision of database services to the firm. Shortly afterward, a Kazkommerts employee named Oleg Zezov (purportedly Kazkommerts’ chief information technology officer) discovered that he could use the access he had newly acquired as a Bloomberg customer to exploit software flaws, escalate his privileges on Bloomberg’s network, and steal various user login credentials, including those of Michael Bloomberg, the founder and then head of Bloomberg L.P. After accessing the accounts of various Bloomberg employees and retrieving data from those accounts, including Michael Bloomberg’s credit card details, Zezov sent a threatening e-mail to Michael Bloomberg, demanding a substantial amount of money. In return, Zezov offered to disclose how he had compromised the Bloomberg computer systems and retrieved the authentication credentials needed to compromise the account and data of the company head. After realizing the nature of the compromise, Bloomberg quickly remedied the software flaw that had allowed Zezov access to the network and worked with the FBI to apprehend Zezov and his counterpart in London, where they had agreed to “resolve” the issue.

Although to this day, the details of the software flaw and the computer systems surrounding the break-in remain unclear (at least in the public domain), it is clear that an accurate assessment had not been made regarding the threat posed to the various technological assets at Bloomberg L.P., especially when it came to the insider threat. Although Zezov was not an employee of Bloomberg, in some ways he can be considered an insider, given that his attacks against Bloomberg were made possible through authorized access he had to Bloomberg’s database services as a customer. This case study is further examined in Chapter 7.

Cyber Terrorist: A Media Buzzword?

The term cyber terrorist falls under the same media buzzword umbrella as black hat and even the overused and abused hacker. The idea that a so-called cyber terrorist could compromise the security of a computer system and cause actual bodily harm as a direct result of that compromise is, even in today’s world, somewhat far-fetched; most compromises have resulted in nothing worse than defaced websites or temporarily downed servers. Even so, defacements of government websites are more common than many people realize. Figure 1.2 displays the defacement of the official White House website by the notorious group “Global Hell.” It has, however, become more probable that a terrorist group could seek the skills of a hacker to augment a more conventional act of terrorism. The following account is loosely based on such an event, in which a teenage male was approached by an individual known to be associated with an eastern terrorist group.

Figure 1.2 White House Website Defacement by Global Hell

In June 1999, an Alaskan hacker named Ryola (aka “ne0h”) was chatting on his favorite Internet Relay Chat channel as he did every other night, bragging about his latest hacked systems (see Figure 1.3) and comparing the speed of his connection with those of the other hackers in the channel. After deciding to call it a night, he went to check his e-mail one last time and noticed a message from an individual claiming to be from a group of eastern “freedom fighters” who had been given Ryola’s contact details by an unidentified friend. In the e-mail, the individual, who identified himself as Kahn, detailed a “project” he was engaged in that required the schematics of three specific models of aircraft. The offer put forward to Ryola consisted of a one-time payment of $5,000 in return for the schematics of the aircraft models listed in the e-mail message.

Figure 1.3 One of Many Web Defacements Carried Out by the Notorious ne0h

At the time, Ryola had a job at a local computer vendor, fixing and building home and business computer systems, but he needed the additional money to fund the trip to Las Vegas he had planned for the following month. After a short telephone call between Ryola and Kahn, made to a local call box that Ryola used to protect his identity, the details of the task at hand were confirmed. Several days later, after he had completed his initial network scans and determined the most likely place to find the schematics he had been asked for, Ryola made his move and compromised multiple systems on one of the aircraft designer’s many networks. He then used this access to leverage further attacks against internal Windows networks, which the designer’s engineers used to store schematics and other sensitive documents. Although it is arguable that Ryola “got lucky,” he did complete the requested task and within days, the schematics of three of the requested aircraft types were in Kahn’s hands. Months later, an aircraft matching the type documented by the schematics Ryola stole was hijacked over Saudi Arabia by the same group Kahn had identified himself as representing to Ryola.

A year went by, and Ryola remained unpaid for the task he had undertaken for Kahn, who Ryola now knew was a terrorist. Several attempts to contact Kahn and request the money he believed he had earned went nowhere. It wasn’t until February 2001 that Ryola heard from Kahn once more. In an e-mail from a new address, Kahn apologized for having failed to pay Ryola, claiming he had been in hiding as a result of investigations into the perpetrators of the previous year’s hijacking. To make up for it, Kahn promised Ryola more than five times the sum previously offered, in return for the schematics of four more aircraft types. By this time, Ryola had found a better job and was left with a bitter taste in his mouth from his previous dealings with Kahn, so he declined the offer. No further e-mail communication occurred between the two, and for Ryola, his dealings with the individual were over. Several months later that same year, Ryola had been up until 5:00 in the morning chatting with some of his online friends about their plans to compromise the security of a South African-based ISP. The next day, he awoke at about 2:00 in the afternoon and turned on the television. The day was Tuesday, September 11, 2001, and before his eyes were reports of four planes being used for acts of terrorism, purportedly by a sister organization of the “freedom fighters” he had previously worked for—planes matching the descriptions in the e-mail Kahn had sent some seven months earlier.

It should be noted that although real names, group names, and other details have been removed from the account, the story is based on real-life occurrences.

Although the connections between the portions of this fictional story that are based on real events and the tragic events of September 11 are somewhat unclear, the hijacking in 1999 was very real, as was the evidence linking the compromised airplane schematics and the group that carried out the hijacking. When we think about adversary characterization, it is important that we keep the bigger picture in view.

The second that we become narrow-minded about the security of our organizations and the resolve of our enemy is the second that we become vulnerable.

In this case, the compromise of data on a poorly protected computer network didn’t by itself create the hijacking situation—only the act of the hijacking’s perpetrators stepping onto the aircraft and taking it into their control did that.

However, the schematic data would almost certainly have aided them in planning the execution phase of the hijacking, increasing their chances of success and reducing the chances of the plot being foiled during execution. When performing a characterization, especially one that characterizes the threats to assets within an organization and tries to establish which information would be of most value to an adversary, it is vital that we remember systems like the one holding those airplane schematics. Sure, it isn’t a database server holding thousands of credit card numbers or authentication credentials. But different assets have different values to different adversaries. The key lies in knowing which adversaries value which assets and how those adversaries are most likely to go about compromising them. The hijacking is an excellent example of an adversary who, at the end of the day, compromised a very high-value asset, aided by data that had been characterized as being of low value, at low risk, and therefore poorly protected.

Failures of Existing Models

Cyber adversary characterization as a whole is a rather large topic, and it would create an unworkable situation if we were to attempt to solve the characterization problem by creating one large metric designed to take all possible data into account. Past attempts at making sense of the vast amounts of adversary-related data have in one way or another failed. Perhaps the primary reason for this failure is that there is simply so much potentially meaningful data to take into account, and so many perspectives, that it is extremely easy to lose sight of the data that matters and, indeed, of what you are actually trying to achieve. In past characterization workshops and research groups, attended by most of this book’s contributing authors, we have often asked the seemingly obvious questions: What are we all doing here? What is the common goal that brings us together, and how can what one of us comes up with, which might on the surface apply only to his individual work practices, help another? The problem is that this way of thinking is far too high level; a lower level of thought is required to answer these questions.

The answers to these questions should become apparent as you read this book, and they are fairly straightforward. At the end of the day, no matter what your use for characterization data—whether to achieve greater levels of security during the design of software, for more accurate network threat assessments, for improved incident analysis, or to detect an insider threat—the group we are trying to characterize remains the same, as do its properties, and therefore so does the set of characterization metrics required to assess those adversarial and group properties. To summarize, a set of metrics to assess the individual properties of the adversary, plus a methodology for assessing meaningful relationships between those metrics, would aid characterization in almost every circumstance.

Some of the problems encountered in attempting to deal with adversarial data in other manners are discussed in the following sections.

High Data Quantities

Vast amounts of adversary data need to be, or at least could be, considered when attempting to complete a characterization. In its raw form, much of this data is of little use to an individual performing a characterization and does little more than present itself as “white noise” around the data that actually means something. Because of these large volumes, the data must be categorized and attributed to adversary properties before we stand a chance of understanding it; without such a methodology, any kind of detailed characterization would be impossible.
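As a toy illustration of the categorization step described above (the event types and property buckets here are invented for illustration, not taken from any real characterization metric), a raw event stream can be collapsed into per-property counts, discarding the "white noise" that maps to no adversary property:

```python
from collections import Counter

# Hypothetical mapping from raw event types to adversary properties.
# A real methodology would attribute far richer data than event names.
EVENT_TO_PROPERTY = {
    "port_scan": "reconnaissance",
    "banner_grab": "reconnaissance",
    "exploit_attempt": "capability",
    "log_wipe": "stealth",
}

def categorize(events):
    """Collapse a raw event stream into per-property counts,
    discarding events ("white noise") that map to no property."""
    props = Counter()
    for event in events:
        prop = EVENT_TO_PROPERTY.get(event)
        if prop is not None:
            props[prop] += 1
    return dict(props)

raw = ["port_scan", "port_scan", "dns_lookup", "exploit_attempt", "log_wipe"]
print(categorize(raw))
# {'reconnaissance': 2, 'capability': 1, 'stealth': 1}
```

Note that the uncategorized `dns_lookup` event simply drops out: attribution to properties is what separates meaningful data from noise.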

Data Relevancy Issues

Certain types of characterization data are often of relevance only in certain cases. Indeed, the data available regarding a subject will also differ from case to case. For this reason, use of a single metric, combined with the lack of a methodology for breaking down an adversary’s individual properties, would probably result in a certain amount of the subject data having no relevance to your circumstance, potentially skewing your final result.

For example, during the characterization of a theoretical adversary type for a threat assessment, no forensic data from an actual attack may be available, since no incident has occurred. A single characterization metric that had to take forensic incident data into account would then have a black hole at its center, because a dependency of the metric has not been met. Furthermore, it would be a tricky task to predict the kind of forensic data that might be available.

As we already stated, much of the data that needs to be considered for an accurate characterization is often of little use in its raw form—in other words, without any supporting research or data evaluation metric to give it some meaning. For the purposes of this book, we refer to this type of data as analog data. An example of analog data is an attempt to profile an individual through the operating system she uses. Although this information can be of use, on its own—without a data evaluation metric or additional research data—it means little. Because we are dealing with the profiling of real people here and not artificial neural networks, almost all data associated with the properties of an adversary is analog. “Digital” data types are almost always arrived at by processing analog data through a data evaluation metric of some kind. For example, we hypothesize that by evaluating the tools an attacker uses in his or her attempt to compromise a system (an analog data type) via a characterization metric, we are left with an integer “score” that may be representative of the threat that individual poses to a defined asset—and, indeed, of his or her skill level. Whether this assessment is true or not, the point is that any two “digital” results are directly comparable to one another, whereas the analog data we started with is not. This strengthens the case for using multiple metrics to assess the various properties of the cyber adversary rather than trying to address the problem as a whole, as we outline in greater detail in the following chapters.
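The analog-to-digital distinction above can be sketched in code. This is a minimal sketch, not a metric from this book: the tool categories, weights, and scoring rule are hypothetical, chosen only to show how raw (analog) observations might pass through an evaluation metric to yield directly comparable (digital) scores.

```python
# Hypothetical sketch: mapping raw ("analog") observations of an
# adversary's tool use onto a comparable ("digital") integer score.
# Categories and weights are invented for illustration only.

TOOL_SCORES = {
    "public_exploit": 1,    # unmodified tool downloaded from the Internet
    "modified_exploit": 3,  # public tool adapted to the target
    "custom_tool": 5,       # purpose-written attack code
}

def score_tool_use(observed_tools):
    """Evaluate a list of observed tool categories and return an
    integer skill indicator. Unknown categories are ignored rather
    than guessed at."""
    scores = [TOOL_SCORES[t] for t in observed_tools if t in TOOL_SCORES]
    return max(scores) if scores else 0

# Two incidents described in incomparable raw terms become comparable:
incident_a = ["public_exploit", "public_exploit"]
incident_b = ["modified_exploit", "custom_tool"]

print(score_tool_use(incident_a))  # 1
print(score_tool_use(incident_b))  # 5
```

The two raw tool lists cannot be meaningfully compared, but the integer scores can, which is precisely the property the text ascribes to “digital” results.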

Characterization Types

Typically, characterizations of cyber adversaries fall into one of two categories: theoretical and post-incident (actual, or forensic) characterization types. The primary reason for making this distinction is that the parameters the metrics use to assess the adversarial properties differ substantially, as do the uses for the final characterization. To qualify whether a characterization will use theoretical or post-incident methodologies, we must assess the nature of the situation and the information available to us to conduct the characterization. A theoretical characterization has no observable attack data to attribute to adversarial properties, whereas the characterization of an actual adversary does; accordingly, the metrics differ, and many metrics used during a theoretical assessment are also commonplace during an actual characterization, but not vice versa.

This is because almost all metrics used for theoretical characterization are also of use in the characterization of actual adversaries, but not the other way around: the additional metrics used in forensic characterizations rely on data that will not be available during a theoretical characterization, since no actual incident has occurred. Although past attack data is useful for building profiles of adversary types, during a theoretical characterization we can only speculate that the subject will exhibit behavior similar to that displayed in a past attack. To this end, past attack data is of most use for improving the metrics used during theoretical characterizations, rather than for making the sweeping assumption that a theoretical adversary will behave like a purportedly similar adversary from a past attack. Remember, no two adversaries will behave in exactly the same way.

Theoretical Characterization

Theoretical characterization theory is possibly of most use for performing asset threat characterizations and improving their accuracy, for improving the designs of network topologies and data systems, and for conceiving efficient methodologies to test those systems’ resilience against the known and the unknown.

Introduction to Theory

Many members of groups developing trusted computer systems have expressed that it would be extremely useful for a developer to be able to assess the profiles of a handful of real hackers identified as possibly posing a threat to whatever is being designed. The developers could then assess how each adversary would go about attacking their platform, using characterization theory to explain how each adversary behaved in the test and why.

Although such an approach may well uncover several previously unseen problems in the design or implementation of the solution, it is a somewhat ad hoc methodology: without careful assessment prior to the exercise, the hand-picked individuals would possess several unknown properties. Furthermore, the process of using "real" hackers may itself taint any value gained, since the individuals who agree to take part in such tests tend to share common properties. The preferable methodology would instead allow a developer to "fuzz" adversaries by changing the properties (the variables) of a modeled adversary and, using the same techniques used to enumerate each adversary, assess how the resulting characterized adversary would behave in a given situation. This methodology has the advantages of being highly controlled and scalable, and in a testing environment, exercises to validate the accuracy of the methodology itself would be trivial to orchestrate using adversary case studies. More information on this topic is presented in Chapters 2 and 3.
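The idea of "fuzzing" a modeled adversary by varying its properties can be sketched in a few lines of code. This is a minimal, hypothetical illustration: the property names (skill, resources, risk tolerance) and the decision rule are assumptions chosen for the example, not the book's actual metric set.

```python
from dataclasses import dataclass
from itertools import product

# Hypothetical adversary properties (the "variables" to fuzz).
SKILL = ["novice", "intermediate", "expert"]
RESOURCES = ["low", "high"]
RISK_TOLERANCE = ["averse", "tolerant"]

@dataclass(frozen=True)
class Adversary:
    skill: str
    resources: str
    risk_tolerance: str

def predicted_behavior(a: Adversary) -> str:
    """Toy decision rule mapping a property combination to an expected attack style."""
    if a.skill == "expert" and a.risk_tolerance == "averse":
        return "custom tooling, slow and stealthy"
    if a.skill == "novice":
        return "public exploit scripts, noisy scanning"
    return "known exploits, moderate stealth"

# "Fuzz" the adversary: enumerate every combination of property values and
# record how each resulting characterized adversary would be expected to behave.
for combo in product(SKILL, RESOURCES, RISK_TOLERANCE):
    adv = Adversary(*combo)
    print(adv, "->", predicted_behavior(adv))
```

Because the enumeration is exhaustive and repeatable, this style of model is what makes the approach controlled and scalable compared with observing a handful of hand-picked individuals.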

This is one of the many uses of theoretical characterization. That example is fairly specific to system design; the second most common use of theoretical characterization is assessing the threat a specific adversary poses to a given asset. Although the underlying theory remains the same, the fact that you are now dealing with assets (computer systems or otherwise) that have already been designed changes the process considerably.

Post-Incident Characterization

During the course of most days on today's increasingly hostile Internet, systems administrators, network administrators, and personnel in the network operations centers of managed security service providers face what would be, if printed out, hundreds of reams of reports detailing purported incidents on their or their clients' networks. Back in the bad old days of intrusion detection, having a system that reported events from host- and network-based intrusion detection devices was considered sufficient. Of course, it wasn't long before most large organizations realized that the $20,000 IDS they had just invested in was generating more event log entries per second than there were employees in the organization to review them. These organizations realized they could either take a measured response, turn a blind eye to the event, or (perhaps most significantly) mark the event as a false positive. With this came the age of the managed security service (MSS) provider, which brought IDS event correlation and false-positive detection technology into the commercial marketplace. For most organizations, the introduction of an MSS provider solved the problem of having spent substantial portions of their budgets on host- and network-based intrusion detection devices that they couldn't afford to manage, but one perhaps unforeseen problem remained.

For a moment, place yourself in the shoes of a systems administrator for a large organization that has purchased network-based intrusion detection devices and acquired managed services from an MSS provider. You log on to the MSS portal site and view the tickets raised for your network segments as a result of multiple, correlated IDS events. The first ticket you spot informs you that someone coming from an IP address located in China has been scanning several specific port ranges on one of your development networks. A note on the ticket says the event has recurred several times over the last three days. Below the summary are several pages of technical spiel regarding the determined source operating system of the packets and the tool most likely used to perform the scan, but you ignore this, since the scanned port ranges are of no significance to you and you have 90 other IDS tickets to go through before your operations meeting in an hour.

Two days later, a new-hire systems administrator announces over the company intranet message board that a Web server on your production network has been compromised through stolen login credentials. It is now your job to lead the investigation into what happened. Aside from your suspicion that an insider was involved in the compromise of the host, which serves the company's primary corporate Web site, you really aren't sure in which direction to take your investigation. Although the site is currently receiving more than 100,000 hits a day, its content was not defaced in any way; there are no logs of what the individual did while on the system, and all you have is IDS data and a potential insider, with no idea what your adversary "looks like" or what motivated the attack.

An Introduction to Characterization Theory

When we talk about post-incident, or forensic, adversary characterization, we are referring to a situation in which an incident of some kind has occurred, yielding some form of data on which to base the characterization, much like our short example. This form of characterization has several primary objectives, each of which hopefully justifies a measured reaction to the incident, whether that reaction is to drive a change in the design of a production network toward a more secure model, to produce an accurate profile of the adversary to aid in his or her capture, or, most important, to gain a better understanding of the kinds of people who really want to break into your network, their motivations, and the kinds of attacks you are likely to see from that characterized subset of adversaries.

Because an actual event has occurred, the characterization begins not at the typical starting point of a theoretical characterization but with the data (IDS or other) pertaining to the incident. To this end, one application of adversary characterization that has attracted substantial interest, and raised many questions, is the possibility of a technology that automates the characterization of adversaries from IDS data alone, providing a real-time "score" of the adversary responsible for triggering the IDS. Such a technology is quite possible, although it is important to remember that an automated mechanism could never be as accurate as manual analysis: an IDS has access to limited data and draws its conclusions from hard-and-fast rules, which an attacker can trick or bypass. Metrics that examine the semantics of an attack could be used to draw conclusions about an adversary from data such as the operating system the attacker is using, the exploit being used, the operating system of the target, and the difficulty of the hack. The following chapters address some of the topics (and problems) introduced in this chapter. Chapter 2 examines much of the characterization theory alluded to here, including theory that can be used both for theoretical (asset-type) characterizations and in the unfortunate times when incidents occur, giving us a framework through which we can seek attribution.
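The automated, rule-based scoring idea described above can be sketched as a simple scoring function over IDS-visible attack semantics. This is a hedged illustration only: the metric names, categories, and weights are invented for the example and are not the book's actual model (and, as the text notes, any such hard-and-fast rules could be tricked or bypassed by an attacker).

```python
# Hypothetical real-time "adversary score" computed from attack semantics
# that an IDS might observe. All categories and weights are assumptions.
ATTACK_DIFFICULTY = {
    "public-exploit": 1,    # unmodified script downloaded from the Web
    "modified-exploit": 2,  # public exploit adapted to the target
    "custom-exploit": 3,    # previously unseen, attacker-written code
}

def adversary_score(attacker_os: str, exploit_class: str, target_os: str) -> int:
    """Crude weighted score: higher implies a more capable adversary."""
    score = ATTACK_DIFFICULTY.get(exploit_class, 0)
    # A non-mainstream attack platform hints at a more experienced operator.
    if attacker_os not in ("windows",):
        score += 1
    # Successfully attacking a hardened or uncommon target suggests deeper skill.
    if target_os in ("hardened-linux", "commercial-unix"):
        score += 2
    return score

# Example: a custom exploit launched from an OpenBSD host at a hardened Linux target.
print(adversary_score("openbsd", "custom-exploit", "hardened-linux"))  # → 6
```

The weakness the text warns about is visible even here: an attacker who knows the rules can deliberately launch a public exploit from a Windows host to keep the score low, which is why such automated scores can only supplement, never replace, manual analysis.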
