Archive for the ‘Risk’ Category


A safe that dispenses cash on command.

May 30, 2014

It seems like the sophisticated ATM attacks that occur around the world (the ones not using skimmers and cameras) happen because the attackers manage to acquire or gain access to a representative ATM of the type they want to attack.

eBay has plenty of used Hyosung and Triton ATMs for sale.  Picking one up is a great investment for an attacker who wants to understand the software and hardware weaknesses of the devices.  Heck, starting up a turnkey ATM business is probably an even better investment, since you’ll get access to the latest and greatest to experiment on.

It reminds me of the gambling industry, and how their slot machine security depended partially on “crossroaders” not being able to get a look at the internals.  This turned out as well as you’d expect.  One former casino thief named John Soares wrote an entertaining book going into some depth on the subject.

Many like to call this security by obscurity and leave it at that, but a little analysis shows the underlying risk decisions.

If you go back 40-50 years to mechanical slot machines as described in Loaded Dice, you can reconstruct the “threat profile” casinos likely constructed around people who can successfully attack slot machines:

  • They need to be technically adept to understand the inner workings of slot machines, and how to ensure payouts
  • They need to be dexterous and accurate to effect the attack in a reasonable amount of time
  • They need to gain unrestricted access to a similar machine for practice

That limits the pool of attackers.  To defend against these threat actors, some countermeasures were put in place:

  • Supply chain regulation, which makes direct acquisition of the equipment riskier and more susceptible to future investigation
  • Common casino surveillance practices extended to the slots floor that limit what actions can be taken by an attacker in the scope of time, noise, and visual detectability
  • A mix of high-payout and low-payout machines, with traffic flow that gives an attacker a smaller window of opportunity at the desirable target machines

With less time, and less freedom of action, an attacker must be *extremely* dexterous and accurate to have a reasonable chance of success.  When it is hard to buy or acquire target slot machines, tracking down the perpetrators after the crime occurs can be easier.  This in turn limits the pool of threat actors.  The attackers need to be *very* good, *very* fast, and *very* careful about how they acquire their knowledge to put the risk equation in their favor.

In Loaded Dice, John Soares’ crew is very good.  Yet the casinos they hit didn’t go bust from his crew and others manipulating machines, which implies the bar was raised high enough to limit just how many people could successfully attack desirable slot machines.




I don’t share your greed…

October 16, 2012

…the only card I need is:

Here is an interesting concern for people with proximity card access control systems. Does your brand of reader have a default “backdoor” card ID that is considered valid?

You really should tear apart and check your card readers.


The customer comes first at Oracle.

December 15, 2011

Even Oracle can screw up a PeopleSoft implementation.  If there were just a little more competition, say from a Tomorrow Now or a Rimini Street, would Oracle step up its game?


So when did RSA start shipping *your* replacement tokens?

June 7, 2011


It appears it was a seed compromise after all.  You need new tokens, and you need them months ago.  Cleartext token use is one way that someone could start gathering your tokens en masse.  An even scarier problem you are open to is a keylogger or other workstation compromise that has captured a token being used.

Pretend there was a keylogger trojan on one of your user’s workstations back in 2009.  Before it was removed/wiped/whatever, someone was able to capture a VPN login with username, tokencode, and PIN.  You know, “juser/908235123456”.  Oh, and for grins, the trojan also logged a decent timestamp:  “03/04/2009 17:34”.

Recovering the token from this capture is a matter of grinding to find the appropriate seed.  The attacker just has to crank through each seed with nearby clock values (to account for drift) and identify the seed that matches the tokencode generated…908235.  Now the attacker can simulate the correct token and pretend to be juser at will.
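The grind can be sketched in a few lines of Python.  The real SecurID algorithm is proprietary, so `tokencode` below is a hypothetical hash-based stand-in; the structure of the seed search, not the token math, is the point:

```python
import hashlib
from datetime import datetime, timedelta

def tokencode(seed: bytes, t: datetime) -> str:
    """Hypothetical stand-in for the proprietary SecurID function:
    derive a 6-digit code from the seed and the 60-second interval."""
    interval = int(t.timestamp()) // 60
    digest = hashlib.sha256(seed + interval.to_bytes(8, "big")).digest()
    return f"{int.from_bytes(digest[:4], 'big') % 1_000_000:06d}"

def recover_seed(candidate_seeds, captured_code, captured_time, max_drift_min=3):
    """Crank through each candidate seed with nearby clock values
    (to account for drift) until one reproduces the captured tokencode."""
    for seed in candidate_seeds:
        for drift in range(-max_drift_min, max_drift_min + 1):
            if tokencode(seed, captured_time + timedelta(minutes=drift)) == captured_code:
                return seed
    return None
```

With a stolen seed database, `candidate_seeds` is every seed RSA generated for you; one captured login plus a decent timestamp narrows it to the single seed inside juser’s token.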

This breach not only put you in danger from the moment it happened (which is necessarily prior to when it was detected by RSA, which was before it was reported by RSA…), it also added value to historical captures of SecurID authentications.  Depending on when your tokens expire and how often you force PIN changes, that could be very, very bad.


Rogue CA risk.

March 30, 2011

What are you doing to mitigate the risk of untrustworthy certificate authorities?


Yes, those are four mistakes you will make in 2011

January 6, 2011

Recap: you will make mistakes.


Number 2: The Equivalency Trap

December 31, 2010

“Well, we already have Bad Thing X out there, and Y isn’t any worse…”

A year later, when the time comes to fix Bad Thing X:

“We already accept Bad Thing Y, and X isn’t any worse…”

Danger is cumulative.  Accepting one risk should never imply that all risks of equal or lesser value are accepted.  Two risks of equal impact mean there are two ways the bad result can happen.  Once you start down this road, inertia keeps all of it from getting fixed.
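The arithmetic behind “danger is cumulative” is worth spelling out.  Assuming two independent risks with illustrative 10% annual probabilities (the figures are made up for the example, not from any real assessment):

```python
# Two "equivalent" accepted risks are not the same as one: together they
# nearly double the chance that at least one bad outcome occurs this year.
p_x = 0.10  # probability Bad Thing X bites this year (illustrative)
p_y = 0.10  # probability Bad Thing Y bites this year (illustrative)
p_either = 1 - (1 - p_x) * (1 - p_y)
print(round(p_either, 2))  # 0.19 -- vs. 0.10 with only one risk accepted
```

Every additional “no worse than X” acceptance compounds the odds the same way.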

Alone in your office, it is safe to compare risk.  Out in public, comparison invites people to come out of the woodwork and claim they have that “risk of equal or lesser value” coupon.  You don’t stop wearing your seatbelt because you smoke three packs a day, and you don’t refuse to quit smoking because you’re an awful driver.


Number 3: All or Nothing

December 31, 2010

“We’re waiting on a complete fix”

…and waiting, and waiting…

Meanwhile, anyone looking to take advantage of your situation won’t be waiting for your fix.  They won’t wait for a patch from the vendor, or for a replacement to make it through your procurement process.  Timelines slip, so put something in place now to protect yourself.  The ones arguing that it’s too much work will be the ones arguing that it isn’t their fault that it never got fixed, after you get nailed.


Number 1: The Hubris of Prediction

December 29, 2010

“If it hasn’t been exploited yet, it probably won’t be.”

Repeat this ten times out loud:  You can’t predict the future.  We like to think we can “forecast” and come up with likely scenarios about what is going to happen, but it never works for long.  Risk management models fall apart in information security because there is so very, very much that you do not know.  The more specific you try to be, the shorter the lifetime of your prediction.

Of course, that’s nuance for people who understand forecasting and risk.  Headstrong predictions will be coming from the stakeholders who have to spend time and money to fix problems.  They will have tortured explanations–the whip marks still fresh–for why the problem of the day won’t ever really be a problem.  In these cases, it pays to be the only one in the organization with the official magic 8-ball.


Number 4: Settled for All Time

December 29, 2010

“We decided five years ago to accept this risk!”

When a decision is made, the people involved are loath to revisit it.  This is especially true when an organization decides that something is “worth the risk”.  The obvious problem is that situations and what you know change, which changes the risk equation that drove the decision in the first place.  It may be inconvenient, but every risk decision deserves periodic scrutiny.  Don’t accept risk without an expiration date.