Wednesday, October 21, 2009

My first identity and access management project

In a previous post (which I subsequently followed up with another), I mentioned the first Identity and Access Management (IAM) project I worked on and said I'd follow that up with more details. It's taken me some time, but I've finally gotten around to it.

My very first IAM project was an extremely large one. It was for one of the larger Australian federal government agencies, and I can't give specifics (or they'll hunt me down), so forgive me if I'm vague in certain parts.

They needed to re-engineer a core, critical business process from end to end. This meant a brand new system had to be built, and it ended up taking years. I personally spent almost two years early in my IBM career working on this monster as a consultant in the Security and Privacy practice, and when I finished serving my time there (reference to prison completely intentional), they were still rolling out other functionality.

My job was done, however, because we had finished laying down the whole security framework and it was working in production. The security system was actually very well architected, thanks to the fact that it was designed by a very senior, very experienced, absolutely world-class enterprise security architect (hi BP, I'm referring to you if you're reading this - probably not though, so one of you other IBMers will have to tell him I said hi). This was actually the key: we could have slotted any equivalent product into the architecture and it would have served its purpose. Of course, being strategically aligned with IBM Tivoli meant this was what we used.

The project used a bunch of IBM software: WebSphere Application Server, MQSeries, DB2, some other IBM software to support EDI transactions (I can't remember the names anymore) and, of course, IBM Tivoli Security software (specifically Tivoli Access Manager for e-business and Tivoli Directory Server). We even had full-blown PKI software (from another vendor) to support signing of messages (for authentication purposes) and encryption. At the core of this mish-mash of software, wrapped with services (provided by a consortium that was not limited to IBM alone), was Tivoli Access Manager for e-business (TAMeb). And what was its primary use? Fine-grained access management or, as some of the market likes to call it today, entitlement management.

That's right, I was responsible for implementing fine-grained access/entitlement management in my very first IAM project, but I didn't know it at the time. Absolutely everything had to ask TAMeb before it could do anything. Want to show a button on a page? Ask TAMeb. Want to show a field on a page? Ask TAMeb. Want to allow someone to process a particular transaction? Ask TAMeb. Can this application send this message to this other application where the message is marked as secret, and does it need to be encrypted as well? Ask TAMeb. No application security decisions were made without first making an authorisation call to TAMeb. And none of this involved web access management! Sure, we had to implement the web access management bits too, because the security architecture mandated them, but that was by no means the focus: this was not a web access management project.
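To make the "ask TAMeb for everything" pattern concrete, here's a minimal sketch of a single central decision point consulted by both the UI layer and the transaction layer. All the names, roles and resources below are invented for illustration; the real project called TAMeb's authorisation APIs from Java, not a Python dictionary.

```python
# Minimal sketch of the "ask the central authorizer before doing anything"
# pattern. The decision table below stands in for TAMeb's protected object
# space; everything here is illustrative, not the real TAMeb API.

POLICY = {
    # (role, action, resource) -> allowed?
    ("clerk",   "view",    "ui/submit-button"):  True,
    ("clerk",   "execute", "txn/lodge-claim"):   True,
    ("clerk",   "execute", "txn/approve-claim"): False,
    ("manager", "execute", "txn/approve-claim"): True,
}

def is_allowed(role: str, action: str, resource: str) -> bool:
    """Central decision point: every application asks here, nowhere else."""
    return POLICY.get((role, action, resource), False)  # default deny

def render_page(role: str) -> list:
    """An enforcement point in the UI layer: show a control only if permitted."""
    widgets = []
    if is_allowed(role, "view", "ui/submit-button"):
        widgets.append("submit-button")
    return widgets

def process(role: str, txn: str) -> str:
    """An enforcement point in the transaction layer: same question, same answerer."""
    if not is_allowed(role, "execute", "txn/" + txn):
        return "denied"
    return "processed"
```

Note the default-deny fallback: anything not explicitly granted is refused, which is the posture you want when one component answers for every system.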

From a development-effort, time, manageability and, ultimately, cost standpoint, having all access control decisions managed centrally made perfect sense. I'm pretty sure everyone on that project would agree with me on this point. Here are a few reasons why they liked having a central access management point:
  • All teams could adhere to the same interface contracts for authentication and authorisation. This also meant that if one development team couldn't get a security component working while everyone else could, the problem was on their side, and it usually got fixed fairly quickly because others knew how to do it properly.
  • No one had to write their own security sub-system, because it was already sitting there waiting to be used and it worked extremely well. This meant they could spend time worrying about the more interesting things, like business logic. I have yet to find a developer who likes writing security code (unless they are building a security product, and even then it's debatable whether they actually like what they are doing) because it's simply a hurdle to getting the "real work" done.
  • Teams could re-use existing policies if required because they were mostly modelled on a business requirement.
  • No need for each team to worry about policy modelling or managing security policies themselves.
  • If a policy changed, the change was reflected across all systems. Without a central store, each system would need to worry about how to synchronise its policies so that there weren't any back doors to exploit. That synchronisation is a whole sub-project on its own.
  • Security within each system was distilled down to a single statement: "Ask TAMeb". Compare this with having to design and build a security sub-system, design and build a way to model identities, roles, policies and resources within it, design and build a management layer on top of it, and then work out how the main application asks it for decisions. And that's the best-case scenario, because quite often development teams simply use configuration files, which are completely unmanageable (if you've ever written a Java Enterprise application and played with crappy deployment descriptors, you know what I mean). If you understand the implications of using config files, you'll also know that every security change forces an application restart (which is going to screw with your SLAs) unless your vendor has some fancy way of dynamically updating in-memory application configuration settings. And I haven't even touched on synchronising security policies with the other systems floating around. Manually, you say? Or use a provisioning product? Sure, it's possible. But the provisioning product means a heck of a lot more design and analysis work, and if you do it all manually, you can expect very frequent security incidents and lots of follow-up meetings with management to explain why they happened.
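The points above can be boiled down to one sketch: a single policy store that every application consults, so a grant or revocation made once is visible everywhere at the same instant, with no synchronisation job and no restart. All the class and resource names here are invented for illustration.

```python
# Sketch of why one central policy store beats per-application config files:
# change a policy once and every consumer sees it immediately. Illustrative
# only; TAMeb played the PolicyStore role in the real project.

class PolicyStore:
    """The one place policy lives."""
    def __init__(self):
        self._grants = set()  # (role, resource) pairs

    def grant(self, role, resource):
        self._grants.add((role, resource))

    def revoke(self, role, resource):
        self._grants.discard((role, resource))

    def permits(self, role, resource):
        return (role, resource) in self._grants

class App:
    """Any application in the consortium; its whole security story is
    'ask the store'."""
    def __init__(self, name, store):
        self.name, self.store = name, store

    def can(self, role, resource):
        return self.store.permits(role, resource)

store = PolicyStore()
store.grant("clerk", "forms/lodge")
store.grant("auditor", "reports/quarterly")
portal, batch = App("portal", store), App("batch", store)

# One revocation closes the door in every application simultaneously:
store.revoke("auditor", "reports/quarterly")
```

Contrast this with each app holding its own copy of the grants in a deployment descriptor: the revocation would have to be replayed into every copy, and any copy you miss is a back door.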

One of the biggest challenges was the huge number of transactions (and, as a result, access control decisions) passing through the system due to the sheer size of the project. To TAMeb's credit, it scaled well and did the job. Of course, we had to do proper capacity planning and implement multiple enforcement and decision points (PEPs and PDPs in XACML terms). And the Policy Administration Point (PAP)? That was a combination of the TAM administration console and an application we had to build to perform "identity management". Why did we have to build this? Because IBM hadn't yet acquired Access360 (which became Tivoli Identity Manager), and the existing IBM provisioning product was a piece of crap called Tivoli Identity Director, which still relied on the Tivoli Framework (those with experience playing with the old framework know it's EXTREMELY painful).
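The scaling approach can be sketched roughly as follows: replicate the decision point, give every replica the same policy, and let the enforcement points spread decision requests across the replicas. This is a toy round-robin model, not how TAMeb's replication actually worked; all names are invented.

```python
# Rough sketch of scaling the decision path: several replicated decision
# points (PDPs) behind the enforcement points (PEPs), each holding the same
# policy. Purely illustrative.

import itertools

class Pdp:
    def __init__(self, name, policy):
        self.name, self.policy = name, policy
        self.decisions = 0  # crude load counter for the sketch

    def decide(self, role, resource):
        self.decisions += 1
        return self.policy.get((role, resource), False)  # default deny

class Pep:
    """Enforcement point that round-robins across the PDP replicas.
    Any replica gives the same answer, because they share one policy."""
    def __init__(self, pdps):
        self._cycle = itertools.cycle(pdps)

    def enforce(self, role, resource):
        return next(self._cycle).decide(role, resource)

policy = {("clerk", "txn/lodge"): True}
pdps = [Pdp("pdp-%d" % i, policy) for i in range(3)]
pep = Pep(pdps)

# Nine decisions spread evenly: three per replica.
results = [pep.enforce("clerk", "txn/lodge") for _ in range(9)]
```

The point of the sketch is that capacity planning becomes a matter of adding replicas, precisely because every replica answers from the same centrally administered policy.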

I should explain why we had to build an "identity management" component on top of TAM. One major criticism of TAM when it comes to fine-grained access management is that it's not very good when you need to add context that relies on user attributes, because:
  1. The admin console doesn't give you access to them (last time I checked). To play around with user attributes, you either need to access the LDAP directly or use a provisioning product like Tivoli Identity Manager.
  2. Contextual access control decisions based on user attributes are also not easy to model without a provisioning product to help. In short, you need to base them on dynamic role memberships, with policies on resources (or entitlements) tied to those roles. Provisioning products cater for dynamic roles based on user attributes out of the box and can provision the required changes to the access management product in near real time.
In other words, we had to build the "identity management" piece to allow for contextual access control decisions based on user attributes. Nowadays of course, you can just use your favourite provisioning product.
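The dynamic-role idea described above can be sketched like this: rules over user attributes decide role membership, and a provisioning step pushes the derived roles to the access manager, which only ever reasons about roles. The rules, attribute names and role names are all invented for illustration.

```python
# Sketch of the "identity management" shim: dynamic roles computed from
# user attributes, then provisioned to the access manager in near real
# time. Everything here is a made-up example, not the real system.

dynamic_roles = {
    # role -> predicate over the user's attribute dictionary
    "senior_assessor": lambda u: u.get("level", 0) >= 3 and bool(u.get("cleared")),
    "assessor":        lambda u: u.get("division") == "claims",
}

def evaluate_roles(user_attrs):
    """The provisioning side: recompute memberships from attributes."""
    return {role for role, rule in dynamic_roles.items() if rule(user_attrs)}

def provision(access_manager, user_id, user_attrs):
    """Push the derived roles to the access manager, which never sees
    raw attributes, only roles."""
    access_manager[user_id] = evaluate_roles(user_attrs)

access_manager = {}  # user -> set of roles the decision point will consult
provision(access_manager, "u123", {"division": "claims", "level": 3, "cleared": True})
provision(access_manager, "u456", {"division": "claims", "level": 1})
```

When an attribute changes (say, a clearance is revoked), re-running `provision` for that user updates their roles, and every policy tied to those roles takes effect immediately.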

The glaring omission from the picture is, of course, XACML. It wasn't even part of the IAM vocabulary at the time, and the lack of XACML support makes it very difficult for the government agency to swap TAMeb out of the picture (which IBM definitely isn't complaining about). But I'm guessing that's not a big deal for them: they spent shed-loads of taxpayers' dollars (many millions) to build this system and it works as designed. They're not about to replace the critical security component that makes all the decisions!
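To show what a standards-shaped interface would have bought, here's a sketch: the enforcement points speak a neutral, XACML-style attribute-based request, and a thin per-vendor adapter translates it for whichever decision engine sits behind it. The request structure is loosely XACML-flavoured, not the actual XACML schema, and all names are invented.

```python
# Sketch of decoupling PEPs from the vendor via a neutral request shape.
# Swap the adapter and the applications never notice. Illustrative only.

def make_request(subject_role, action, resource):
    """XACML-style request: attributes grouped by category."""
    return {
        "subject":  {"role": subject_role},
        "action":   {"id": action},
        "resource": {"id": resource},
    }

def vendor_adapter(request, policy):
    """One adapter per vendor; the PEP never sees vendor specifics.
    'policy' stands in for whatever the engine's native store is."""
    key = (request["subject"]["role"],
           request["action"]["id"],
           request["resource"]["id"])
    return "Permit" if policy.get(key) else "Deny"

policy = {("clerk", "execute", "txn/lodge"): True}
decision = vendor_adapter(make_request("clerk", "execute", "txn/lodge"), policy)
```

Without that neutral layer, every call site is written against one vendor's API, which is exactly the lock-in described above.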

The motivation behind my occasional rants about the term "entitlement management" and how it's all too often used as a marketing gimmick to sell more products stems from my time on this project.

Broken record time: call your vendor out if they're blatantly repackaging fine-grained access management as "shiny new entitlement management". If they mean something more along the lines of what the Burton Group thinks it should mean, we might start to get somewhere. It's still a moving target, however, so I'm sure the definition will expand and evolve, especially with all this Cloud crap floating around.

1 comment:

Anonymous said...

If you have a second...take a look at Radiant Logic Virtual Directory.

Vince Hendrickson