Wednesday, November 11, 2009

IBM Redbooks rock

Many of my former IBM colleagues/friends like to bring up the fact that I'm rarely nice to IBM on this blog. They're right, but it's because I hold IBM to high standards. When they don't deliver, I make it a point to voice my opinion. For example, I have no idea what the brains trust in the IBM Tivoli Security group over in Austin have been doing for the past few years, but their products have stagnated, and the new ones they bring out aren't particularly exciting. It's disappointing to say this, but IBM are losing ground (falling behind, if you listen to Forrester) in the Identity & Access Management stakes to Oracle and CA. Wake up, product management team!

One of the BEST things to come out of IBM, however, is their Redbooks. These are essentially step-by-step guides for implementing IBM technology, with lots of background information thrown in and fictitious "real-world" scenarios. Not many people realise IBM publishes these books, but they're great if you're doing anything with IBM software or hardware (and in many cases, even when you're not).

I'm bringing this up because I noticed they released the latest Identity Management Design Guide, covering Tivoli Identity Manager 5.1, a few days ago (the one I co-authored 2 iterations back was for version 4.5.1). It's a little thicker (i.e. it has more pages) than the version I was involved with, but in looking through this latest version, one thing stood out: I still recognise most of the content, especially the parts I wrote. It's nice to see the content being leveraged in subsequent versions. What that means is that the materials produced with each iteration are solid and practical, which is true for pretty much all the Redbooks that get released.

In short, Redbooks are a great resource for:
1) People who want to learn about an IBM product or a subject area.
2) People who want to implement the relevant IBM product.
3) Competitors who want to take a detailed peek at an IBM product ;-)

I'm sorry IBMers, but I couldn't resist taking a stab at IBM Tivoli Security. The Redbooks, however, remain one of the best things to come out of IBM. They are infinitely better than those marketing data sheets we've all wasted time reading.

Saturday, November 07, 2009

CA DLP headed in the right direction

When CA acquired Orchestria, I said it was a good move. I even wrote a follow-up post about why Identity & Access Management (IAM) and Data Security/Data Leakage Prevention (DLP) fit so well together. 2 weeks ago, CA sent out a fairly lengthy press release with a list of products they've updated. The 2 products that caught my eye were GRC Manager 2.5 and DLP 12.0. This post covers the DLP product.

I spoke with Gijo Mathew, Vice President of Security Management at CA about the DLP announcement to get a better understanding of CA's strategy in the longer term and clear up a few things which confused me with their press release. Here are the "new features" for DLP 12.0 which I've lifted from the release:
  • Enhanced Discovery – Provides the ability to scan data locally on endpoints and to scan directly into structured ODBC databases to identify sensitive data.
  • Extended Endpoint Control – Leverages existing data protection policies to control end-user activity such as moving data to writable CDs or DVDs, and taking a screen print of sensitive content.
  • Seamless Archive Integration – Integrates with CA Message Manager, a product in CA’s Information Governance Suite, to help deliver end-to-end message surveillance, reporting, and archiving.
The first thing I should point out is that the ability to scan structured databases is a BIG plus. Many DLP vendors out there do quite a lot with either unstructured data (e.g. files on disk, data in memory) or structured data (e.g. databases), but they don't usually handle both. Orchestria fell into the "unstructured data" bucket. Now under the CA banner, they can finally support the ability to scan and classify data sitting in databases. Note however, that the ability to scan/identify/classify data and the ability to enforce controls over access to this data are completely separate things. To be able to properly enforce controls over structured data, a product would need to hook into the low level database security mechanisms. As a result, the enforcement of access controls into databases based on the content being accessed is difficult and very few vendors can actually do this at the moment (CA included).
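To make that distinction concrete, here's a rough sketch of my own (not CA's implementation; sqlite3 stands in for an ODBC connection, and the card-number regex is a toy) of what scanning "directly into" a structured database involves: walking tables and columns and classifying cell values, rather than crawling files on disk.

```python
import re
import sqlite3

# Toy detector: a run of 13-16 digits (allowing spaces/dashes) is treated
# as a possible credit card number. Real DLP engines use far richer
# detection (Luhn checks, proximity keywords, document fingerprints).
CARD_PATTERN = re.compile(r"\b(?:\d[ -]?){13,16}\b")

def scan_database(conn):
    """Walk every table and text column, flagging cells that look like
    card numbers. A real scanner would speak ODBC/JDBC to a production
    database; sqlite3 merely stands in here."""
    findings = []
    tables = [r[0] for r in conn.execute(
        "SELECT name FROM sqlite_master WHERE type='table'")]
    for table in tables:
        # PRAGMA table_info rows are (cid, name, type, ...); grab names
        cols = [c[1] for c in conn.execute(f"PRAGMA table_info({table})")]
        for row_id, *values in conn.execute(f"SELECT rowid, * FROM {table}"):
            for col, value in zip(cols, values):
                if isinstance(value, str) and CARD_PATTERN.search(value):
                    findings.append((table, col, row_id))
    return findings

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (name TEXT, notes TEXT)")
conn.execute("INSERT INTO customers VALUES ('Ann', 'paid by 4111 1111 1111 1111')")
conn.execute("INSERT INTO customers VALUES ('Bob', 'paid by invoice')")
print(scan_database(conn))   # [('customers', 'notes', 1)]
```

Note the output is a classification (where the sensitive data lives), not an enforcement decision; as discussed above, enforcing access controls inside the database is a separate, much harder problem.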

While we're talking about scanning, CA also improved the way they scan for unstructured data. In previous versions, the scanning had to be performed from a central server. This is not ideal in many cases thanks to all the things that get in the way like firewall rules, security restrictions on machines, desktops not necessarily being available when required for scanning (either by being off the network or turned off) and so on. A more robust scanning strategy should support the ability to have the endpoints scan local data when required. It takes the load off the central server and allows for a more complete view of the environment from a data management standpoint. The new version of CA DLP added this capability. The negative however, is the performance hit taken by the endpoint while the scanning is being done (this is not a CA specific drawback - any endpoint scanner is going to impact performance).

The second point about the additional features around endpoint control (specifically the mention of moving data to CDs and DVDs, and controlling screen print events) really confused me. The examples given are supported by just about every single endpoint DLP vendor out there, and for a moment I thought Orchestria didn't have these capabilities. This was not the case, however. Gijo mentioned that they merely enhanced the capabilities around the CA DLP endpoint component and that these were some examples they picked out. The point CA were trying to make was that they still do the core DLP things expected of any DLP product worth implementing. Apparently after the previous release of DLP, many assumed they were no longer focusing on the core DLP capabilities and were instead going down the "identity aware DLP" road. This is definitely not the case, according to Gijo.

While the points mentioned in the press release are interesting in that they show CA are serious about core DLP capabilities, what impressed me most was the longer term vision CA has for the product. In fact, it is this longer term vision that had some accusing CA of neglecting their core DLP capabilities in the previous release.

CA are fortunate in that the natural evolution of products in the DLP space fits nicely with their need to integrate DLP with their portfolio of products. It makes product management decisions slightly easier for them, instead of having to spend a lot of time balancing the need for additional features against being able to sell a cohesive suite of solutions (which is commonly the problem with acquisitions). In other words, adding integration points provides CA DLP with additional capabilities that make sense for most of the other products involved as well. For example:
  • The ability to add context to access control is a very powerful thing. Context is very much about information, with data at its core (although it's not everything, because data alone does not tell us what a user is actually doing). What I'm referring to is commonly labelled as content aware access management. A common use case here typically involves integration of access control decisions by a web access management component (Siteminder in CA's case) with data aware mechanisms provided by a data security solution (CA DLP in CA's case). The web access management product can either make decisions based on static tags on the information/resource being accessed or dynamic analysis made in real time by the data aware component (e.g. this data looks like a bunch of credit card numbers so we should not be giving the user access).
  • The analysis of data usage patterns across different environments allows for additional smarts when trying to manage risk, especially in cases where patterns are outside the norm of a user's peers. The trick here is being able to turn the data gathered into information to feed back to a GRC (Governance, Risk & Compliance) solution or SIEM (Security Information and Event Management) dashboard. Otherwise, you could just point any old reporting engine at the data and achieve the same result (which is far from what one would call proper integration).
  • Access and data governance are typically silos in organisations today. If you're able to tie the two together, the management overhead is reduced significantly. That's why it's a big deal if an organisation is able to get a single view of both from a management standpoint. This is not to say it cannot be done today. The key point I'm making here is that it's just really hard to do. If a vendor makes it that much easier to achieve, it saves time and money.
  • Improving the lifecycle activities around enterprise information and content management by using the data discovery and classification capabilities to provide additional context to the relevant processes.
I'll leave it as an exercise for the reader to figure out which CA product/s to slot into each example. The point is, they have something in their product stack to integrate with DLP in each example. What these illustrate however, is the direction CA are headed in with regards to the DLP strategy (even though some of it is a little high level).
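To illustrate the first example (content aware access management), here's a toy sketch of my own. The labels, roles and the credit card regex are all invented for illustration, and bear no relation to how Siteminder or CA DLP actually integrate; it only shows the shape of the idea: the access layer asks a data-aware component to classify the resource at request time, then applies policy based on the labels rather than on a static URL rule alone.

```python
import re

# Toy content classifier standing in for a DLP engine. A run of 13-16
# digits (allowing spaces/dashes) is treated as a possible card number;
# real engines use Luhn checks, keywords, fingerprints and so on.
CARD_PATTERN = re.compile(r"\b(?:\d[ -]?){13,16}\b")

def classify(content: str) -> set:
    """Return a set of sensitivity labels for a piece of content."""
    labels = set()
    if CARD_PATTERN.search(content):
        labels.add("pci")
    return labels

def is_access_allowed(user_roles: set, content: str) -> bool:
    """Content-aware decision: classify the resource dynamically, then
    decide based on the labels plus the user's roles."""
    labels = classify(content)
    if "pci" in labels and "pci_handler" not in user_roles:
        return False  # looks like card data; user isn't entitled to it
    return True

# Same user, two documents: the decision depends on the content itself
print(is_access_allowed({"staff"}, "Quarterly sales were up 4%"))    # True
print(is_access_allowed({"staff"}, "Card: 4111 1111 1111 1111"))     # False
```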

Gijo was honest in acknowledging they don't have a lot of the things they want out of the box just yet. At this stage, many of the things I've mentioned (in terms of product strategy) will require a good amount of services work. I'm not going to criticise them for this as they only acquired Orchestria earlier this year and it's unrealistic to expect all the required integration to be built out so quickly, especially with a whole suite of products like CA's. What I do like a lot, is where they're going.

CA's strategy is good. They're on a journey and their DLP product is the jewel in their security suite from a competitive standpoint (against the other big IAM vendors). They also stack up well against their competitors in the data security space; in this case the advantage comes in the form of their IAM suite (and to a certain extent, their ever improving GRC prowess), which other data security vendors do not have. Those familiar with the security space might notice I haven't made any mention of the fact that RSA also have both IAM and DLP capabilities. Don't forget however, that it's a bit of a stretch to call RSA's IAM capabilities a suite (e.g. they don't do provisioning). They also have no real GRC capabilities to speak of (their GRC page is a bit of a joke).

As long as CA don't neglect the core data security capabilities in DLP along the way, they're going to do just fine.

Friday, October 30, 2009

CA GRC Manager adds IT GRC focus

Earlier last week, CA sent out a fairly lengthy press release with a list of products they've updated. The 2 products that caught my eye were GRC Manager 2.5 and DLP 12.0. This post covers GRC Manager.

Back in February, I spoke to CA about their 2.0 release of GRC Manager. Then, it was all about what they called RiskIQ and turning raw data into useful information to better manage risk and compliance. To me, version 2.0 marked the real arrival of CA as a GRC vendor to contend with because it showed they were serious and that the 1.0 version wasn't a flash-in-the-pan-side-project they thought they'd try out to see what would happen.

Late last week, I spoke with Marc Camm (SVP & GM, Governance, Risk and Compliance Products) and Tom McHale (VP of Product Management for CA GRC Manager) to see what they had to say about the new release.

The message that came through loud and clear was that version 2.5 is very much about IT GRC. If you're interested in specific new features, here's the relevant section lifted directly from CA's press release (I don't normally do this, but it was a long one):
  • Automated Questionnaires - Allows customers to easily create, distribute, and analyze the results of questionnaires for risk and compliance controls assessments.
  • Robust Reporting Engine - Provides a set of pre-defined, role-based reports, as well as easily configured reports for local needs.
  • Ongoing IT Controls Monitoring - Automates input of IT controls status information into CA GRC Manager and provides a single view of overall IT risk and compliance profiles.
  • Extensions to IT Control Framework - Supports mapping between individual controls and authority documents, featuring a library of more than 400 regulations with mappings to IT controls from the Unified Compliance Framework.
  • Streamlined Management of Select FISMA Requirements - Offers a centrally managed information security system with extensive dashboards and reports, providing instant, comprehensive information about controls and processes related to Federal Information Security Management Act (FISMA) requirements.

The new features and focus on IT GRC came about through feedback the product management team gathered from existing GRC Manager customers. Reading between the lines, it also looks like CA tried to make version 2.5 of the product much more usable (I'm in no way suggesting 1.0 was not usable).

Some examples mentioned include:
  • Dashboards improvements to allow for better navigation between risks, controls, application contexts etc.
  • Standard, pre-configured roles included out of the box for better support from day one. In a way this could be viewed as "best practice" roles for controlling access to various parts of the application and actions performed.
  • Extended functionality within the reporting engine to allow users to customise pre-built (out of the box) reports without having to build their own from scratch all the time.
The addition of FISMA support and extended Unified Compliance Framework mappings are further evidence of this IT GRC focus.

That's not to say there isn't any work to be done from an implementation standpoint. It's a GRC product. Anyone who thinks you can implement a GRC product without a good amount of internal effort (and external help) is delusional. What I think CA's tried to do is make GRC Manager more of an enabler for Enterprise GRC; in other words, they want to help fast-track efforts by providing as much up front as possible.

One thing that's interested me for some time is the notion of managed services (I even ran a survey to try to find out more). As a result, I couldn't help but ask Marc and Tom whether any of their customers actually use CA GRC Manager On Demand (the hosted version of the product). Apparently 20% (I won't hold CA to this number as it was just a rough figure) of their customers use GRC Manager in this capacity with a bunch of others wanting to migrate.

The fact that this version still starts with a "2" isn't lost on me. It's not a major release in the traditional sense, but CA added enough features to warrant them making some noise about it. I'll be interested to see what version 3 holds, but I'm even more interested in the percentage of customers that end up going with GRC Manager On Demand in the next release.

Thursday, October 29, 2009

CA's been busy

Earlier last week, CA sent out a fairly lengthy press release with a list of products they've updated; I guess they've been busy.

That said, most of the updates were fairly minor:
  • Access Control got some privileged user management teeth.
  • Identity Manager got more hooks into Role & Compliance Manager to give us "Smart Provisioning". According to CA, this means they now provide: "the capability during the provisioning process to prevent business and regulatory policy violations. The software will proactively check for things like SOD (segregation of duties) violations during the provisioning process; it will issue an alert when an entitlement has been assigned that is significantly out-of-pattern or different from the person’s peers; and it will help with productivity and efficiency by suggesting roles that may be useful to a person when compared to the roles of his or her peers."
  • Records Manager got some stuff but I fell asleep reading about it.
The 2 products that caught my eye were GRC Manager and DLP. I'll be writing 2 follow-up posts about them. Stay tuned.

Wednesday, October 21, 2009

My first identity and access management project

In a previous post (which I subsequently followed up with another), I mentioned the first Identity and Access Management (IAM) project I worked on. I also said I'd follow that post up with more details about the project I mentioned. It's been some time since I said that, but I've finally gotten around to it.

My very first IAM project was an extremely large one. It was for one of the larger Australian federal government agencies and I can't give specifics (or they'll hunt me down) so forgive me if I'm vague in certain parts.

They needed to re-engineer a core, critical business process from end-to-end. This meant a brand new system needed to be built, and it ended up taking years. I personally spent almost 2 years early in my IBM career working on this monster as a consultant in the Security and Privacy practice. When I finished serving my time there (reference to prison completely intentional), they were still rolling out other functionality.

My job was done however, because we had finished laying down the whole security framework and it was working in production. The security system was actually very well architected thanks to the fact that it was designed by a very senior, very experienced, absolutely world class enterprise security architect (hi BP, I'm referring to you if you're reading this - probably not though so one of you other IBMers will have to tell him I said hi). This was actually the key. We could have slotted any equivalent product into the architecture and it would have served its purpose. Of course, being strategically aligned with IBM Tivoli meant this was what we used.

The project used a bunch of IBM software: WebSphere Application Server, MQ Series, DB2, some other IBM software to support EDI transactions (can't remember the names anymore) and of course IBM Tivoli Security software (specifically Tivoli Access Manager for e-business and Tivoli Directory Server). We even had full blown PKI software (from another vendor) to support signing of messages (for authentication purposes) and encryption. At the core of this mish-mash of software wrapped with services (provided by a consortium that was not limited to IBM alone) was Tivoli Access Manager for e-business (TAMeb). And what was its primary use? Fine-grained access management, or as some of the market likes to call it today: entitlement management.

That's right, I was responsible for implementing fine-grained access/entitlement management in my very first IAM project, but I didn't know it at the time. Absolutely everything had to ask TAMeb before it could do anything. Want to show a button on a page? Ask TAMeb. Want to show a field on a page? Ask TAMeb. Want to allow someone to process a particular transaction? Ask TAMeb. Can this application send this message to this other application where the message is marked as secret, and does it need to be encrypted as well? Ask TAMeb. No application security decisions were made without first making an authorisation call to TAMeb. None of this stuff involved web access management! Sure, we had to implement the web access management aspects too, but this was not the focus. We put in the web access management bits because they were mandated by the security architecture. But this was by no means a web access management project.
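For those who like code, the shape of the "Ask TAMeb" pattern looks roughly like the mock below. To be clear, this is not the TAMeb API (the real product exposes authorisation through its aznAPI and Java interfaces); the policies, users and resources are all made up. The point is that every decision, from "show this button" to "process this transaction", funnels through one central call.

```python
# Central policy store: (role, action, resource) triples.
# In the real system this lived in TAMeb, not in application code.
POLICIES = {
    ("teller", "view", "/ui/approve-button"),
    ("manager", "view", "/ui/approve-button"),
    ("manager", "execute", "/txn/approve-payment"),
}

USER_ROLES = {
    "alice": {"manager"},
    "bob": {"teller"},
}

def authorize(user: str, action: str, resource: str) -> bool:
    """The single question every application asks: may this user
    perform this action on this resource?"""
    return any((role, action, resource) in POLICIES
               for role in USER_ROLES.get(user, set()))

# The applications themselves contain no security logic of their own:
def render_approve_button(user):
    return "[Approve]" if authorize(user, "view", "/ui/approve-button") else ""

def approve_payment(user, payment_id):
    if not authorize(user, "execute", "/txn/approve-payment"):
        raise PermissionError(f"{user} may not approve payments")
    return f"payment {payment_id} approved"

print(render_approve_button("bob"))   # [Approve]
print(approve_payment("alice", 42))   # payment 42 approved
```

Change a triple in the central store and every application's behaviour changes with it, which is exactly the property the next section's list of benefits describes.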

From an ease of development, time, manageability and subsequently cost standpoint, having all access control decisions managed centrally made perfect sense. I'm pretty sure everyone on that project would agree with me on this point. Here are a few reasons why they liked having a central access management point:
  • All teams could adhere to the same interface contracts when it came to authentication and authorisation. This also meant that if a development team couldn't get a security component working and everyone else could, it was their fault, and they could usually get it fixed fairly quickly because others knew how to do it properly.
  • No one had to write their own security sub-system because it was already sitting there waiting to be used and it worked extremely well. This meant they could spend time worrying about the more interesting things like business logic. I have yet to find a developer who likes writing security code (unless they are building a security product and even then it's debatable whether they actually like what they are doing) because it's simply a hurdle to getting the "real work" done.
  • Teams could re-use existing policies if required because they were mostly modelled on a business requirement.
  • No need to worry about policy modelling or management of security policies.
  • If a policy changed, it would be reflected across all systems. Without a central store, each system would need to worry about how to synchronise its policies so that there weren't any back doors to exploit. This is a whole sub-project on its own.
  • Security within each system was distilled down to a single statement: "Ask TAMeb". Compare this with having to worry about designing and building a security sub-system, designing and building a way to model identities, roles, policies, resources within the sub-system, designing and building a management layer on top of the sub-system and then worrying about how to ask the sub-system to make decisions from the main application. I'm talking best case scenario here of course because quite often, development teams simply use configuration files which are completely unmanageable (if you've ever written a Java Enterprise application and played with crappy deployment descriptors, you know what I mean). And if you understand the implications of using config files, you'll know that each time a security change is made you have to restart the application (which is going to screw with your SLAs) unless your vendor has some fancy way of dynamically updating in-memory application configuration settings. Oh, I haven't yet thought about how to synchronise security policies with the other systems floating around. Manually you say? Or use a provisioning product? Yeah it's possible. But it also means a heck of a lot more design and analysis work (in the case of the provisioning product). If you want to do it all manually, you can expect to have very frequent security incidents and lots of follow-up meetings with management to explain why it happened.

One of the biggest challenges was the huge number of transactions (and as a result, access control decisions) passing through the system due to the sheer size of the project. And to the credit of TAMeb, it scaled well and did the job. Of course, we had to do proper capacity planning and implement multiple enforcement and decision points (PEPs and PDPs in the XACML world). And the Policy Administration Point (PAP)? This was a combination of the TAM administration console and an application we had to build to perform "identity management". Why did we have to build this? Because IBM hadn't acquired Access360 yet (which became Tivoli Identity Manager), and the existing IBM provisioning product was a piece of crap called Tivoli Identity Director, which still relied on the Tivoli Framework (those with experience playing with the old framework know it's EXTREMELY painful).

I should explain why we had to build an "identity management" component on top of TAM. One major criticism of TAM when it comes to fine-grained access management is that it's not very good when you need to add a bit of context that relies on user attributes because:
  1. The admin console doesn't give you access to them (last time I checked). To play around with user attributes, you either need to access the LDAP directly or use a provisioning product like Tivoli Identity Manager.
  2. Contextual access control decisions based on user attributes are also not the easiest to model without a provisioning product to help. In short, you need to do it based on dynamic role memberships and have policies on resources (or entitlements) tied to these roles. Provisioning products can do this (cater for dynamic roles based on user attributes) out of the box and provision the required changes to the access management product in near real time.
In other words, we had to build the "identity management" piece to allow for contextual access control decisions based on user attributes. Nowadays of course, you can just use your favourite provisioning product.
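To sketch what I mean by dynamic role memberships driven by user attributes (a toy model of my own; the attribute names and rules are invented), the idea is that role membership is recomputed from attributes, and the resulting roles are what gets provisioned to the access management product:

```python
# Toy model of attribute-driven ("dynamic") role membership -- the kind
# of thing provisioning products now do out of the box. Attribute names
# and rules here are purely illustrative.

DYNAMIC_ROLES = {
    # role name -> predicate over the user's attributes
    "senior_caseworker": lambda u: u.get("grade", 0) >= 7,
    "vic_office":        lambda u: u.get("state") == "VIC",
}

def evaluate_roles(user_attrs: dict) -> set:
    """Recompute role membership from attributes. When an attribute
    changes, re-running this and pushing the delta to the access
    management product keeps access decisions contextual."""
    return {role for role, rule in DYNAMIC_ROLES.items() if rule(user_attrs)}

alice = {"grade": 8, "state": "VIC"}
print(sorted(evaluate_roles(alice)))   # ['senior_caseworker', 'vic_office']

alice["state"] = "NSW"                 # attribute changes...
print(sorted(evaluate_roles(alice)))   # ['senior_caseworker']
```

Policies in the access manager are then tied to the roles, so a change to a user attribute flows through to access decisions without anyone touching the policies themselves.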

The glaring omission from the picture is of course XACML. It wasn't even part of the IAM vocabulary at the time, and the lack of XACML support in the project makes it very difficult for the government agency to swap TAMeb out of the picture (which IBM definitely isn't complaining about). But I'm guessing it's not a big deal for them because they spent shed-loads of tax-payers' dollars to build this system and it works as designed. They're not about to replace the critical security component that makes all the decisions!

The motivation behind my occasional rants about the term "entitlement management" and how it's all too often used as a marketing gimmick to sell more products stems from my time on this project.

Broken record time: call your vendor out if they're blatantly repackaging fine-grained access management as "shiny-new-entitlement-management". If it's more along the lines of what the Burton Group thinks it should mean, we might start to get somewhere. It's still a moving target however, so I'm sure the definition will expand and evolve, especially with all this Cloud crap floating around.

Wednesday, June 10, 2009

Out of the office and moving home

If you follow me on Twitter, you'll know I've just started a long holiday through Europe and the US. I won't be back until late July. Internet access will be intermittent, so please don't be offended if I don't respond to your message as quickly as expected.

As for where I'll be when I'm done with this trip; the answer is Sydney, Australia. I'm moving back home for the moment thanks to UK government incompetence. I won't get into the details because the fact that I'll be in Sydney for the next year or so (at least) won't change.

Look forward to catching up with a few of you when I get to the US. I'll ping you closer to the date as promised.

Friday, May 15, 2009

Spinning entitlements

A few of us (e.g. me, Nishant Kaushik, Matt Flynn, Paul Madsen, Ian Glazer) have been lamenting the negative effect Twitter has had on our blogging frequency. As a result, we've also had a lack of discussion via our blogs. But Twitter discussions can only go so far.

As such, my previous post about entitlement management was intended to stir the pot a little bit (which is why I posted the "equation" as a hypothesis) and also bring to the surface one of my pet peeves with how some vendors position entitlement management.

I wanted to achieve 2 things:
  1. Point out what you're actually getting if you buy an entitlement management product from an access management vendor.
  2. If we MUST use the term "entitlement management", we should at least have a good reason for doing so instead of using it to sell more products by re-branding an old concept.
I purposely focused only on the authorisation/access management side of things to serve the first part of the agenda. That is, if we look at the entitlement management products from access management vendors today, the hypothesis that entitlement management = fine-grained access management + XACML holds. In other words, that's all you are buying.

The second point required discussion so I conveniently ignored everything else that I've heard people refer to as "entitlement management". The most common example is in relation to IT Governance, Risk and Compliance (GRC), specifically identity compliance and attestation-type activities. I don't typically refer to this as "entitlement management", but others do.

These are the 2 obvious sides to the entitlement management coin, but there are others that could conceivably pop up. For example, Eve Maler brings up the possibility that her ProtectServe/"relationship management" proposal could potentially be thought of as "Enterprise 2.0 entitlement management" (update: Eve clarified via the comments to this post that she meant ProtectServe rather than Vendor Relationship Management).

It's clear that the term "entitlement management" means different things to different people. It's a bit like the spinning lady illusion (where depending on how you focus, you see it spinning in one direction while others see it spinning the opposite way). In our case, some might not think we're even looking at a lady's silhouette. I don't like things to be complex (which makes people wonder why I chose this line of work). But it's also for this reason that I don't like the term "entitlement management". It confuses the heck out of everyone!

I received a few email responses. There were also a couple of opinions on Twitter, a few more in the comments to my post (which I responded to there) and a handful of blog responses (Nishant, Ian Glazer and Dave Kearns). Some agreed, others partially and some not at all.

Before I move on, I should clear something up. Paul Toal (from Oracle) comments:
"I think you are making a huge assumption that entitlements management is only related to the web access layer".

I'm not. And that is part of the problem with calling all these access management products "web access management" products. It makes everyone assume that's all these products do. And that was part of my point; that they can do so much more. Simply calling them "web access management" products does their capabilities an injustice (although some have less fine-grained access management capabilities than others). I should be able to add a little more context when I write about my first Identity and Access Management project (as promised in my previous post) but I won't talk about this "web access management" generalisation today because we'll lose focus.

In reading through the opinions of others, I noticed one thing in general; most toed the company line, which was not unexpected:
  • People from Sun more or less agreed (probably because I said I liked how they are approaching it with OpenSSO).
  • People from Oracle tended to use fine-grained access management as a baseline and expanded on it to include centralisation across all platforms with a common enterprise services view on everything.
  • Consultants didn't like having a new term for something they've been doing for a long time, and found it difficult to explain to customers without confusing them.
  • Those who have been around and know security but don't have as much of a vested interest in "entitlement management" like to take a more pragmatic approach in trying to segment the definitions into manageable portions that make sense while not discounting the term altogether.
  • Some of the more sales-oriented folk who don't have to sell a so-called "entitlement management" product didn't like having to explain the term because it distracts customers. Either that or they worry about the need to "shoe-horn" their product/s into the "entitlement management" hype if that's all they hear about when doing sales calls.
I more or less agree with what Nishant says in his blog post, but not Oracle's strategy in having 2 separate products. Then again, Oracle Entitlements Server was brought on board via their acquisition of BEA so I can understand why they chose to keep it separate. It probably didn't make much financial sense to try to roll one product into the other. In their case, cash won out over idealism.

Ian Glazer has 4 problems with my hypothesis/definition. I'll address each here:
  1. "Definitions that include a protocol are worrisome as they can overly restrict the definition" - He has a point. I should probably have said "an authorisation/access management policy standard". But in reality, my view is that it'll be XACML or some evolution of it. So if we wanted to make an academic definition, then we should probably not say "XACML". I'm not a fan of academic definitions though, otherwise I would have accepted the PhD offer from my university professor instead of joining IBM.
  2. "I fear this definition is a reflection of products in the market today and not a statement on what “entitlement management” is meant to do" - Bingo! Ian Glazer's just summed up a lot of what I'm getting at. I DID define it as per what the access management vendors think it should mean. Whether this is what it should mean is up for debate.
  3. "There is something missing from the definition – the policy enforcement point" - True, if you think the definition should include the terms: policy administration point (PAP), policy decision point (PDP) and policy enforcement point (PEP). Then again, I didn't have PAP or PDP in the hypothesis either. It comes down to whether one thinks that access management products as they exist today have PAPs, PDPs and PEPs. Even so, I personally think they should be in the finer details and not the definition. As an aside, access management products with fine-grained authorisation capabilities do have some PEPs. But PEPs are a little bit like connectors in provisioning products; one size does not fit all. There's usually some work that needs to be done to build a PEP that works for whatever system you are hooking into the access management product. The obvious exception here is the web reverse proxy component in access management products because they can assume a standard protocol is in place for web-based interactions. Of course, the web reverse proxy does coarse-grained access management (aka "web access management") by many common definitions. But now I'm just confusing the matter so I'll stop :-)
  4. "I have a problem with the phrase “entitlement management” do not use the phrase “entitlement management” the same way we do" - It's because the vendors have managed to confuse customers. On one hand, we have the "entitlement management" that vendors use to describe run-time authorisation (which I've been referring to as access management). On the other hand, we have vendors that talk about certain aspects of IT GRC as being "entitlement management". Based on what Ian Glazer is saying, the companies he's talked to see it more from an IT GRC (or operational) perspective.
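The PEP/PDP relationship I described in point 3 can be sketched in a few lines. This is a hypothetical, minimal illustration (the class and method names are my own, not any vendor's API): a central PDP evaluates policy, and each system needs its own PEP adapter to translate local requests into decision queries and enforce the answer. The web reverse proxy is the easy case because HTTP is a known quantity.

```python
# Minimal, hypothetical PDP/PEP sketch. All names are illustrative only,
# not any vendor's real API.

class PolicyDecisionPoint:
    """Evaluates access requests against centrally administered rules."""

    def __init__(self, rules):
        # rules: list of (subject_role, action, resource, effect) tuples
        self.rules = rules

    def decide(self, subject_role, action, resource):
        for role, act, res, effect in self.rules:
            if role == subject_role and act == action and res == resource:
                return effect
        return "Deny"  # deny by default


class WebReverseProxyPEP:
    """One PEP flavour: intercepts HTTP requests. This is the easy case
    because a standard protocol (HTTP) can be assumed; most other systems
    need bespoke PEP work, much like provisioning connectors."""

    def __init__(self, pdp):
        self.pdp = pdp

    def handle(self, user_role, method, url):
        decision = self.pdp.decide(user_role, method, url)
        if decision != "Permit":
            return 403  # enforce: block the request
        return 200      # enforce: let it through


pdp = PolicyDecisionPoint([("manager", "GET", "/reports", "Permit")])
pep = WebReverseProxyPEP(pdp)
print(pep.handle("manager", "GET", "/reports"))  # 200
print(pep.handle("intern", "GET", "/reports"))   # 403
```

The point of the sketch: the PDP is generic, but every new system you hook in needs its own `handle`-style adapter, which is why one size does not fit all.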

Ian Glazer goes on to say:
"Using a single term (“entitlement management”) to span both the run-time authorization decisions as well as the necessary legwork of gathering, interpreting, and cleansing entitlements can lead to confusion."

Yeah, tell me about it. Unfortunately, this is what we have today. It's an overloaded term which has already led to confusion. At this point however, I'm unsure what Ian Glazer's position on the whole matter is. On one hand, he's helping round out the entitlement management definition from an access management (or run-time authorisation as he puts it) standpoint, but on the other hand he alludes to the fact that including this aspect in the overall definition can lead to confusion.

Dave Kearns doesn't agree with the hypothesis either. He also says he doesn't quite agree with Ian Glazer but I think he's actually disagreeing with Burton Group's perception of how enterprises define entitlement management. As I said, I'm not quite sure how Burton Group wants to define it. No matter. At least it prompted an opinion from Dave. But he goes on to say that he thinks entitlements should be tied to roles, not individuals:
"Differentiate entitlement management from access management, also (else, why use both terms?). Individuals get access, roles/groups get entitlements. Access is granted to resources (hardware, applications, services, etc.) while entitlements specify what a particular role/group can do with or within that resource."

If I were to simplify this, I would interpret it as saying access and entitlements are more or less the same thing but with intricate, important differences in terms of how you look at it. You use one term for individuals and the other term for roles. I do agree with Dave from a conceptual point of view, but I don't see the need to differentiate individuals and roles here because we're getting very close to implementation specifics if we do and are in danger of adding to the confusion.

Neil Readshaw, an ex-colleague of mine from IBM and a worldwide authority on Tivoli Security products says (via a comment in response to my previous post):
"I try to talk about authorization when taking a resource centric view (e.g. who can do something), and entitlements for a user centric view (e.g. what can this user do). In the end, it may be the same data making both of those determinations."

Neil and Dave are talking about pretty much the same thing but I prefer Neil's version because it takes a higher-level approach and avoids specifics like mappings between individuals, roles and resources. Neil's statement about "the same data making both of those determinations" however, raises the question: why do we need a separate product to do entitlement management?

I did find this statement by Ian Glazer interesting though:
"A bit of history – three or so years ago Burton Group, at a Catalyst, introduced the phrase “entitlement management” to include the run-time authorization decision process that most of the industry referred to as “fine-grained authorization.” At the time, this seemed about right. Flash forward to this year and our latest research and we have learned that our definition was too narrow."

The automatic assumption would be to blame Burton Group for this mess. But if you think about it, it's not really their fault. It's the fault of all the vendors for jumping on the bandwagon and hyping it to the point it's now out of control and we find ourselves having to come to terms with some sort of definition so we can see through the entitlement fog.

Burton Group's views seem to have evolved and the vendors are stuck with a definition a couple of years old. My objection all along has been to vendors jumping on the bandwagon and trying to convince everyone that it's such a "must have", new concept to the point where customers need to buy their new "entitlement management" product before understanding what it really means. Worse still, a few went out and built brand new products and slapped the name "entitlement manager" (or a variant) on it. Would it not have made more sense to allow their access management products to evolve naturally with business needs and keep calling them "access management solutions"? Why did they have to go and confuse the matter and in the process force customers to support separate data and policy stores for products with a lot of overlapping capabilities?

If we must have the term "entitlement management" hanging around in the lingo, we need to re-adjust our understanding of what it means. Once this is done, perhaps the vendors will rename/re-think their "entitlement management" products or roll them into their access management offerings like they should have done in the first place. Analysts like Ian Glazer should be able to help the market define this because they talk to companies about what they are actually doing. My objection isn't with the industry as a whole for talking about entitlement management. It's out in the wild now and nothing any of us can do will put the entitlement genie back in the bottle. But we shouldn't be re-branding an old concept just to sell more products.

If this discussion doesn't get us closer to a good definition, I'd like at least one thing to happen. If you are looking at an entitlement management product, please take a step back to understand what it is you actually want. You might already have the tools to do it. If not, make sure you're buying the right tool for the right reason. Not just because the vendor tells you it's their "entitlement management" product and you MUST have it because you are looking at user entitlements in your environment.

Tuesday, May 12, 2009

The entitlement and access management equation

Hopefully I'm not having a "mad-scientist moment"...

Web access management = Coarse-grained access management
Entitlement management = Fine-grained access management + XACML

I've always pooh-poohed the whole notion of entitlement management and even blogged about it some time ago (Daniel Raskin from Sun had similar thoughts). It's not because I don't think it's necessary; quite the contrary.

Good access management is crucial within any enterprise security architecture. It doesn't need to be the first thing you implement, but it should be pretty darned high on the list. It saves time and money in the long run and makes things so much easier to design, build and manage. Not to mention when the audit and compliance people come knocking, a lot of the information is available from a central location (I didn't say "all the information" because it's wishful thinking that any organisation can get every system's access controls centrally managed).

Notice something? I said access management, because that's all entitlement management really is; fine-grained access management. The main reason I've never particularly been taken by the whole notion of entitlement management was because I didn't agree with the industry's need to tag it as such. As far as I was concerned, it was just access management (or authorisation as some prefer to say). There was no need for a new name because we've all been doing access control for a long time right? Apparently not. The marketing machines started spinning not so long ago and all of a sudden, it's "New School Identity Management" (note: you may have to register to read the article).

The marketing people would have us believe that before entitlement management products came along, all the industry was capable of doing was web access management. That is, URL-level access controls. And if we needed to have finer-grained access controls, we would just throw our hands up and say "nup, product doesn't do that and we'll have to write our own or just ignore it". As Guy Kawasaki would say, this is BULL-SHIITAKE! Or perhaps I joined the Identity and Access Management (IAM) world with rose-coloured glasses.
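To make the coarse vs. fine-grained distinction concrete, here's a hypothetical sketch (my own illustrative code, not any product's): a URL-level "web access management" check only asks whether a role may reach a URL at all, while a fine-grained check also weighs attributes of the user, the resource and the context.

```python
# Illustrative only: contrasting coarse-grained (URL-level) checks with
# fine-grained (attribute/context-aware) checks. Field names are made up.

def coarse_grained_check(role, url, acl):
    """Web access management style: can this role reach this URL at all?"""
    return role in acl.get(url, set())

def fine_grained_check(user, action, resource):
    """Entitlement management style: the decision also depends on
    attributes of the user and the resource."""
    if action == "approve" and resource["type"] == "payment":
        # e.g. approvers may only sign off payments under their limit,
        # and never their own requests
        return (user["approval_limit"] >= resource["amount"]
                and user["id"] != resource["requested_by"])
    return False

acl = {"/payments": {"approver", "auditor"}}
print(coarse_grained_check("approver", "/payments", acl))  # True

user = {"id": "u1", "approval_limit": 5000}
payment = {"type": "payment", "amount": 8000, "requested_by": "u2"}
print(fine_grained_check(user, "approve", payment))  # False: over limit
```

Both are access management; the second just asks a richer question. That's the whole of my argument in two functions.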

Early on in my stint at IBM, I worked for the Security and Privacy Services team. My very first Identity and Access Management (IAM) project was actually entitlement management focused with some aspects around web access management. At the time, the term "entitlement management" didn't exist the way it does today. As far as we were concerned, we were implementing access management; both coarse and fine-grained. And it was on this project that I first got my hands VERY dirty with Tivoli Access Manager for e-business (TAM) and Tivoli Directory Server (TDS). I'll talk more about this in a follow-up blog post or this will get too long and you'll all fall asleep.

I've had a few more opportunities to work with TAM (and all the other IBM Tivoli Security products) and it is because of my various experiences with the products that I haven't stopped wondering why companies like IBM bothered to build a completely brand new product to handle entitlement management (in IBM's case they now have Tivoli Security Policy Manager). TAM does have a few potential issues when it comes to entitlement management as we know it today. For example:
  1. If you need to model contextual policies that rely on user attributes and getting your hands dirty scares you, then you'll find it a little bit challenging.
  2. It doesn't have standard support for eXtensible Access Control Markup Language (XACML) out of the box.
  3. It doesn't have a nice, standardised web services layer for other applications to integrate with (it does however, have APIs but these aren't typically what we would call "services").
That said, I thought TAM had a XACML toolkit (update: Neil Readshaw, one of my ex-colleagues at IBM and a worldwide authority on TAM, assures me there was never any XACML support in TAM whatsoever. I must have been drinking too much of the kool-aid when I worked for IBM Tivoli). Either way, I'm sure product management and development could have rolled XACML support into the core product. They could also have very easily opened up the console (or even built a whole new one that was easier to use) to allow for easier administration of contextual access policies, and even expanded on the policy model to do more fancy things (alternatively you can use a provisioning product to fill in the gaps). As for web services, that isn't too difficult to build into TAM. They built a whole federated identity management product on top of it. Why not an entitlement management one?

I've seen Tivoli Security Policy Manager (TSPM) in action and it doesn't add very much on top of what TAM already does (and has been doing for years). And now, customers have to manage 2 separate policy stores for 2 products that do very similar things! You heard me right; TAM and TSPM have different policy stores and management interfaces. I realise I'm picking on IBM yet again, but this is true of many vendors that have separate web access management and entitlement management products.

So why the sudden emergence of entitlement management when I insist it's been there all along? If you followed the link back to my earlier rants about entitlement management, you may have noticed I had a rather involved public conversation with Securent's (now owned by Cisco) CEO Rajiv Gupta. Upon reading the discussion again, we seemed to be debating entitlement management vs fine-grained authorisation based on different definitions, assumptions and agendas.

This suggests that what fine-grained authorisation lacked was a standard way to define everything; enter XACML. If we could talk about everything using common terms, with an understanding of where everything fits and what a policy looks like, then we might get somewhere. The same goes for systems: there needed to be a way for them to leverage authorisation services in a standardised way. So I put forward that entitlement management is simply fine-grained authorisation + XACML. Standardisation and interoperability were the missing ingredients in taking fine-grained authorisation mainstream. Perhaps another ingredient was that many companies weren't ready to worry about fine-grained authorisation yet, as they were still busy with provisioning and coarse-grained authorisation (i.e. web access management).
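As a sketch of what that standardisation buys you, here is a minimal XACML-2.0-style request context built in Python. The AttributeId URNs are the well-known ones from the XACML spec; treat the snippet as illustrative rather than a schema-validated document.

```python
import xml.etree.ElementTree as ET

# Illustrative XACML-2.0-style request context. The AttributeId URNs are
# the standard ones from the XACML spec; the snippet is a sketch, not a
# schema-validated document.
NS = "urn:oasis:names:tc:xacml:2.0:context:schema:os"
STRING = "http://www.w3.org/2001/XMLSchema#string"

def make_request(subject_id, resource_id, action_id):
    """Serialise a who/what/how question in a vendor-neutral format."""
    req = ET.Element(f"{{{NS}}}Request")
    for tag, attr_id, value in [
        ("Subject", "urn:oasis:names:tc:xacml:1.0:subject:subject-id", subject_id),
        ("Resource", "urn:oasis:names:tc:xacml:1.0:resource:resource-id", resource_id),
        ("Action", "urn:oasis:names:tc:xacml:1.0:action:action-id", action_id),
    ]:
        section = ET.SubElement(req, f"{{{NS}}}{tag}")
        attribute = ET.SubElement(section, f"{{{NS}}}Attribute",
                                  AttributeId=attr_id, DataType=STRING)
        ET.SubElement(attribute, f"{{{NS}}}AttributeValue").text = value
    return ET.tostring(req, encoding="unicode")

xml_request = make_request("alice", "/payments/123", "approve")
print(xml_request)
```

The value isn't the XML itself; it's that any application can ask any compliant authorisation service the same question in the same shape.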

I'll finish with the following points:
  1. If you can't tell, I like TAM (but I'm biased because I cut my IAM teeth using it).
  2. If you have a web access management product, make an effort to sit down and evaluate whether you REALLY need an entitlement management product. If you also have a provisioning product on top of your web access management product, then you REALLY need to think about whether you REALLY need an entitlement management product. Is XACML support a good enough reason to buy a whole new product? Do you really need XACML at this stage of your project? Can you talk the vendor into building XACML support into their web access management product?
  3. I like Sun's approach of rolling fine-grained access control/entitlement management capabilities into their core web access management product (note: unfortunately, I have a feeling Oracle will throw OpenSSO aside in favour of their existing products - which I should point out are 2 separate products).

Wednesday, April 29, 2009

CA continues to round out their security portfolio

Lots of interesting things happened last week in the information security space. This was largely due to the RSA Conference and the number of company announcements that coincide with it each year. Of course, Oracle stole much of the thunder by announcing their acquisition of Sun Microsystems. I've chosen not to comment on it because there's enough speculation out there (some informed, some less so). Also, I would have sounded like a broken record because any analysis on my part would have sounded very similar to my piece on the potential IBM and Sun deal that eventually fell through, but with an Oracle spin.

From a large Identity and Access Management (IAM) vendor standpoint, the most interesting piece of news actually came from CA. In fact, the only bit of IAM vendor news came from CA because the others didn't announce anything at all (I don't count "what the heck is going to happen to the Sun IAM stack" as news because at this point it's all speculation and is very much dependent on what Mr Ellison and his cohorts decide to do once the deal closes and the dust has settled).

I've written about how CA has been running faster than their competitors since late last year and they haven't stopped if the latest announcements are anything to go by. They actually made 3 announcements around RSA; the first was a pointer to Dave Hansen's (Corporate Senior Vice President and General Manager of CA’s Security Management business unit) keynote at RSA (the video of the keynote is here), the second I'll talk about in the next paragraph and the third related to a survey conducted by CA which Dave also referenced in his keynote.

The second announcement was the most interesting as it involved news around their portfolio, where they announced 3 new products:
  • CA Role & Compliance Manager
  • CA DLP
  • CA Enterprise Log Manager
Note: The DLP acronym generally stands for Data Leakage/Loss Prevention

I spoke to Dave Arbeitel (Vice President of Product Management for the Security Management Business Unit) about the new products late last week and got to find out a little bit more.

I hadn't actually noticed this but Dave pointed out that CA's approach to security management is now solution-focused, with products grouped into complementary solution areas.
The large IAM vendors are split between being product-focused and solution-focused and the approach taken is very much dependent on the overall company strategy. One thing I should note is that being solution-focused is fine as long as you don't get too smart for your own good and confuse customers (as I've accused IBM of doing on occasion).

Each of the 3 new products fits into one of the solution categories. My interpretation of the solution areas is that CA seems to have grouped what they deem to be the most complementary products together. The most interesting thing to note is that they've grouped CA Access Control together with CA DLP. This makes sense and is evidence that CA gets DLP and are starting to implement a strategy around how IAM and DLP can work together effectively. I'm not saying they get it completely yet, but this is not necessarily a bad thing. The industry as a whole doesn't quite understand the IAM/DLP/Data Security overlap at the moment. At least CA are trying to work it out by putting their money where their mouth is. But I'd caution them against putting too much of a marketing spin on things because people (like me) will call them out when required.

The key thing Dave wanted to get across was that CA has a broader security management strategy and these product announcements are simply steps along the execution path. This has been apparent to those of us following the market over the past couple of months and if CA keeps going, they're going to do just fine as long as they execute well.

I didn't get too much into product features of the Role & Compliance Manager and DLP products with Dave because Eurekify and Orchestria had relatively mature products. There wasn't much point in trying to pick those products apart. The only noteworthy change was that CA combined Eurekify's products into a single product (for those that are unaware, Eurekify had a separate compliance management product that integrated with their role management product). Dave also noted that the new products were not just a re-brand. CA's done additional development work to add functionality and integration points into the existing CA IAM suite.

While we're on the point of integration into the existing IAM suite, I'd like to scrutinise the supposed deep integration and "identity-awareness" of the DLP product. I had a chuckle watching Dave's keynote (and it wasn't from watching the almost cringe-worthy parody of "The Office"). During the demo, they supposedly showed identity management and data security integration. For anyone who hadn't seen a DLP product in action, it looked pretty slick/impressive.

As someone who has demonstrated a DLP product hundreds of times (maybe even thousands - I lost count after a few months) I can tell you that most of the demo only showed the DLP product in action. The solitary identity bit was the de-provisioning of the user (Dave) from a role (which took away access to the SAP application in the demo). Apart from the fact that CA Identity Manager probably has a standard connector into CA DLP to provision and de-provision access for users, they weren't doing anything in the demo that anyone else couldn't do by taking a decent provisioning product and building a connector into a good DLP product. Unfortunately CA, this isn't what I'd call identity-aware-DLP. I realise I may be dismissing other potentially (but unknown to me) nice integration points between DLP and CA's Identity and Access Management suite but I'm going based on the demo and calling it as I see it.

I did try to dig a little deeper into Enterprise Log Manager's features however, mainly because it's brand-spanking new. The only problem with Security Information and Event Management (SIEM) products is that you can't really get a handle on how good a product is until you get your hands on it. Dave assured me that installation is a breeze and that it can even be deployed as a virtual appliance, which I have no reason to doubt. From a technical standpoint, this is not difficult to achieve.

Good SIEM products tend to be measured by the ease of integration, number of standard collectors to other systems and reporting capabilities. The questions I asked Dave were driven by these factors and I gathered that Enterprise Log Manager is still very much a 1.0 product (that is, fairly immature). As an example, Dave mentioned that the product was tightly coupled with their IAM solutions. CA is probably referring to the fact that they can reference policies defined in some of their IAM products (although I'm not sure how deep or wide this integration runs) and have Enterprise Log Manager report on policy violations. But from a customer standpoint, I would expect that this also means I can point Enterprise Log Manager at any CA IAM product and have it be able to collect all relevant user events and report on them without much effort. Unfortunately, this is not the case (I'm sure CA will correct me if I misinterpreted Dave's comments). There needs to be some level of work done to have collectors that can pull information out of the other CA IAM products.

This is not to say there aren't any standard collectors, but I got the impression that this covers the main operating systems and some standard security devices but not much else. The thing about a lack of collectors however, is that the issue fixes itself over time because the more a product is deployed, the longer the list of standard collectors gets. CA needs to build standard collectors for their other IAM products sooner rather than later (I would start with Access Manager, Access Control and DLP). You cannot claim to have tight integration with your own suite of products if you don't at least have these products sorted.
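The collector work I'm describing is essentially parse-and-normalise: each source system needs code that maps its native log format into the SIEM's common event schema. A hypothetical sketch (the field names and the sample log format are mine, not CA's):

```python
import re
from datetime import datetime

# Hypothetical SIEM collector sketch: each source system needs a parser
# that maps its native log format into a common event schema. The field
# names and the sample log format are illustrative only.

def collect_syslog_auth(line):
    """Parse a simplified auth-log line into the common schema."""
    m = re.match(r"(\S+ \S+) user=(\S+) action=(\S+) result=(\S+)", line)
    if not m:
        return None  # unparseable lines would be queued for review
    ts, user, action, result = m.groups()
    return {
        "timestamp": datetime.strptime(ts, "%Y-%m-%d %H:%M:%S"),
        "user": user,
        "action": action,
        "outcome": "success" if result == "ok" else "failure",
        "source": "unix-auth",
    }

event = collect_syslog_auth("2009-04-29 10:15:02 user=dave action=login result=ok")
print(event["user"], event["outcome"])  # dave success
```

Multiply that little parser by every log format in the enterprise and you can see why the length of the standard collector list is what SIEM products get measured on.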

The reporting capabilities seem to be a little more fleshed out. The vision for reporting is that customers use a combination of standard reports, services and new report packs that CA sends out from time to time. The list of standard reports includes many of the usual regulatory suspects, but in my experience these types of standard reports tend to need customisation to meet business needs. For customers that don't feel like using the standard reporting interface, there is a level of integration with SAP BusinessObjects Crystal Reports.

I'm not trying to belittle CA's SIEM efforts. They obviously see SIEM as part of their strategy, but they are a little late to the party. That doesn't preclude them from trying, and at least they have now arrived. Having chosen to build the product from scratch, they would have been foolish or delusional to expect a world-beater at the first attempt, and I think they knew it. It does seem a little puzzling that they went with the build approach rather than acquiring a leading SIEM player.

As is the norm with these discussions, I tend to ask about things not related to the news at hand as we move past the main items of discussion.

My first unrelated question related to the IDFocus product they acquired and whether any part of that solution made it into the 3 new products. The answer was no because even though it has some level of potential integration with role and compliance efforts, it fits best into the Identity Manager product where it helps to link business processes with provisioning requirements.

My second unrelated question was around the notion of having a central policy management point for all the products (like Symantec and McAfee are trying to do with their own products). The point of this question was to gauge if CA's strategy includes the centralisation of policy management. I didn't expect much because it's actually a VERY difficult thing to do and very few of the large IAM suite vendors have the appetite to invest in this area. I'm not talking about the engineering aspect, which is simple when compared to the actual analysis behind how one would rationalise all the different ways policies could be represented and trying to figure out how to apply an over-arching model to a large portfolio of products. Add DLP to the mix and it gets exponentially more complex because of all the data-centric requirements. For the record, Dave's answer was that the focus shouldn't really be on having a central policy store or management point. It's more about having the right processes occurring between the IAM products to ensure the correct policies are in place at the points where they need to be applied.

Overall, I think CA's got the right idea in terms of strategy. Whether their products are able to deliver remains to be seen. They've got some serious integration work to do so they can get a more coherent story out there and have products that deliver on the promise they are showing. CA does have a trump card to play that their competitors don't have (yet), and that's the DLP product. As I've said before, identity and data security go hand in hand.

Monday, April 06, 2009

Why do large companies like helping phishers?

It was stupid bank behaviour that compelled me to start blogging a few years ago. I've also noted the questionable ways which banks in general deal with customers from a security standpoint (although my bank's recently cleaned up its act somewhat).

It's not just the banks that help to facilitate phishing scams with their antiquated, unsafe processes when dealing with customers. Almost every large institution that holds personal data does at least one thing in an unsafe, insecure way. And almost every organisation that has a call centre forces us to divulge personal (and often account security) details to the person we speak to on the phone before they are "allowed" to access our account. This is not particularly safe either as the person on the other end of the line could very well take down the details and use them later (aka the most common argument against offshoring call centres). But there is a certain level of trust because most of the time, we call a number that we have used in the past and know with 99% certainty belongs to the organisation we mean to be dealing with. It's also the way the process works and we have learned to live with it despite the flaws.

A large part of the blame here lies in the power imbalance between consumers and the organisations we hand our personal information to in return for a service. Companies have way too much power (and consumers little or no control) when it comes to our information, but I'm going off-topic here as I don't mean to be talking about Vendor Relationship Management (VRM). Back to the topic at hand...

I recently contacted my mobile service provider (from here on in, known as "Stupid Phone Company (SPC)") to change something about my account. First of all, the IVR system made me authenticate myself before patching me through to the warm body at the other end of the line who then proceeded to ask me exactly the same questions I had just provided to the system. I wasn't in the mood to rant at the person as they weren't to blame. They were simply doing their job. Process fail number 1: Why bother having the IVR system waste my time and authenticate me when the fool at the other end of the line is going to ask me the same thing again SPC?

In any case, the person couldn't help me. They said I had to notify the company in writing either via snail mail (what decade are we in SPC?) or via a form on their website. I took the online form option and didn't hear back for a few days.

Today, I received this in my inbox:
"Hi Ian,

Hope you are doing fine.

I’d like to help you Ian, however; for this I will need to access your account and currently I am unable to access your account due to security reasons.

In order for me to access your account and check the details on your account, please confirm the security details given below:

PIN (1st and 2nd digit)
Full address with postcode
Date of birth
Method of payment

I assure you I'll be able to sort this out as soon as I receive this information.

I look forward to your response.

Kind regards,
(Name redacted)"

At this point, the only form of assurance I had that this came from a legitimate source was the "from" address in the email header. This however, isn't exactly difficult to fake (as my first year University lecturer demonstrated to us in ohhh, week 1 of "Computing101"). In other words, I have no assurance that it's from SPC. In fact, it even reads like a phishing email.
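That point about the "from" address being easy to fake is worth making concrete: in email, the From header is just a text field the sender fills in, and nothing in the message itself verifies it (receiving-side defences like SPF and DKIM exist, but the header alone proves nothing). A harmless local sketch using Python's standard library, with made-up addresses:

```python
from email.message import EmailMessage

# The From header is just text the sender sets; the message itself
# carries no proof of who wrote it. Addresses here are fictitious.
msg = EmailMessage()
msg["From"] = "customer.service@legitimate-telco.example"  # can claim anything
msg["To"] = "ian@example.org"
msg["Subject"] = "Please confirm your security details"
msg.set_content("Hi Ian, please reply with your PIN...")

print(msg["From"])  # whatever the sender typed, verified by nobody
```

Which is exactly why "the from address looks right" is no assurance at all.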

Being the paranoid security person that I am, I picked the phone up and called customer service to validate that they had indeed sent me an email and to double-check the email address I had to send the reply to. After questioning the poor customer service person and eventually getting them to agree that this process is ridiculous and insecure, they still insisted they could not get around the process and that this was the only way of getting my issue resolved because my request could not be met over the phone.

So it seems that this is the standard procedure when one fills in an online form with this company. In which case, they are exposing their customers to a security nightmare by building phishing-like behaviour into a standard procedure that all their customers will probably need to use at some point. Did you hear that SPC? Your BAU process is the same as the one phishers use!

I've actually visited this company in a professional capacity (in one of my previous jobs) and can confirm they do indeed have a security procedures and operations department. In other words, "we don't pay people to think about these things" is not a viable excuse. Someone there needs to be fired (and it's not the customer service department).

Thursday, March 19, 2009

What does an IBM acquisition of Sun mean for Identity Management?

IBM employees: hands up those of you expecting to tell management to stick their redundancy packages where "the Sun don't shine"?

Sun employees: hands up those of you who walked into a meeting this morning and came out to be greeted by people with spray cans and paint tins eager to paint you IBM-blue, itching to call you a smurf?

In case you've been in a cave today, the rumour ("rumor" for my American friends) doing the rounds is that IBM is in talks to acquire Sun. I should stress that this is a rumour, but I suppose everyone thinks the fact that the Wall Street Journal is one of the news outlets reporting on it gives it some additional weight.

I wasn't going to bother writing anything given that nothing has actually happened and I'm not sure how this is a no-brainer move for IBM, but a few people have emailed asking what I think. So the easiest way to respond was to post this.

There's no shortage of coverage across news outlets, blogs and in Twitterville. Everyone's talking about the big picture. Larry Dignan (thinks it makes sense) and Dana Gardner (doesn't think it makes sense) have more insightful commentary than most stories I've read. Commentators generally mention data centers, servers (i.e. hardware), cloud computing, professional services, Java, IDEs (NetBeans vs. Eclipse - consensus opinion seems to be that NetBeans will go the way of the Dodo), Unix (AIX vs. Solaris) and open source. Many of them are saying that it makes sense in a macro-company kind of way. I, however, will focus on one specific area where I don't think it makes any sense at all. Then again, in the grander scheme of things there's usually some sort of sacrifice when these things happen, especially today when the flavour of the microsecond is all things cloud-related and not un-sexy-enterprise-off-the-shelf-run-it-in-your-own-data-center software.

My point is that very few reports have touched on something that should be on your mind if you work in enterprise software: what's going to happen to the software stack? There are overlaps EVERYWHERE! There are too many products to talk about in detail but IBM cannot simply throw Sun's stack away because of the backlash they're going to get from customers and the community at large.

If IBM does acquire Sun, they sure as heck aren't doing it for the software (except for perhaps additional "control" over Java). And they sure as hell aren't doing it because Tivoli's run out of role management vendors to acquire and liked VAAU (which became Sun Role Manager) so much they went to Sam Palmisano and told him to buy Sun as punishment for getting to VAAU before them. Does this mean they'll just throw Sun's software division away? Of course not! That would be stupid on IBM's part (and despite what I've written in the past, I don't think IBM are stupid). They will more than likely run everything separately initially, figure out which bits and pieces plug holes in the IBM software portfolio and then "blue-rinse" (rebrand) them. The overlapping pieces will be absorbed into the IBM blue-ether, with useful components re-used within existing IBM software and the perceived useless bits discarded. It's IBM's modus operandi (just look at what they did with their DB2-related acquisitions). It's also what Oracle does, so at least someone else thinks it makes sense.

And here's where I'm going to head down the rabbit hole, because this is all based on a rumour. In other words, it's speculation and anything said is simply mental masturbation.

The least affected IBM software brand will be Lotus. Rational should be relatively unscathed. The other three IBM software brands (Tivoli, WebSphere, Information Management aka DB2), however, will notice a few changes. None will be affected more than WebSphere, but Tivoli comes a close second in the upheaval stakes. This is where IBM's Identity and Access Management (IAM) suite sits, which is what I'm going to focus on now.

The first win for IBM will be in the marketing stakes. I don't mean this in terms of positive karma or PR, but more in terms of the marketing talent at Sun. This is because Sun has been better at marketing, community building and listening to customers than IBM has within the IAM space. Now, assuming IBM doesn't fire the whole IAM marketing team they'll be inheriting a very strong team of people (yeah I know their engineers aren't too shabby either). In my opinion, Sun understood the evolution in marketing that's been occurring much earlier than IBM and hence are ahead in the game from this standpoint. Actually, pretty much every other big IAM vendor understood this before IBM. In IBM's defence, they are starting to pick up their game and are running with it wholeheartedly.

On to the products. I thought of doing a full comparison by listing each company's full list of IAM products, but then I started writing down IBM's list from the website (in case I missed anything by relying on my memory) and it gave me a headache (Side note to IBM: WTF?! The list has gotten much more complicated and longer. And to add to the confusion, you even list "products" that are actually solutions made from combining different underlying products. If you are able to give an ex-employee who used to architect, implement and sell this stuff for you a headache when going to your website, what do you think customers are going to think? Or maybe I just don't have the mental capacity to read introductory product information about IBM software). Conversely, Sun's list is much easier to follow (although whoever runs the website should probably place Access Manager and Federation Manager in a separate list noting that they've been combined to form OpenSSO). Here's the core Sun IAM list with commentary:
  • Sun Directory Server - IBM has Tivoli Directory Server.
  • Sun Identity Compliance Manager - IBM does not have a direct equivalent.
  • Sun Identity Manager - IBM has Tivoli Identity Manager.
  • Sun OpenSSO Enterprise - IBM has Tivoli Federated Identity Manager and Tivoli Access Manager for e-business (which is actually used as a component within the Federated Identity Manager product, but I won't complicate things here).
  • Sun Role Manager - IBM does not have a direct equivalent.
Thanks to Sun's simpler list, there's a relatively clear picture to work with. I should note that IBM has quite a few more IAM products than I've listed (IBM lists them as part of the Security Management suite), but I'll ignore them because a potential acquisition of Sun should not affect them too much.

What's abundantly clear here is that Sun Role Manager and Sun Identity Compliance Manager (don't confuse this with Tivoli Compliance Insight Manager because the IBM product addresses different requirements) look to be safe from the chopping block. IBM will simply take the 2 products (aside: my understanding is that Compliance Manager is actually derived from Role Manager - Sun people, please correct me if I'm wrong) and "blue-rinse" them. Their names will likely stay the same with "Sun" being replaced with "IBM Tivoli". Either that or IBM will combine them and call it "Tivoli Identity, Access and Role Compliance Manager" or some long-a**ed name that forms yet another T-acronym. At least you can kind of pronounce TIARCM, albeit getting tongue twisted in the process.

As for the other Sun IAM products, their futures are at risk if this rumour proves to be true. IBM's spent shed-loads of money acquiring, "blue-rinsing" and subsequently developing their equivalent products. It's VERY unlikely that IBM will throw that investment away only to repeat the exercise again with Sun's stack. In other words, I have a feeling that in the longer term, Sun Directory Server, Sun Identity Manager and Sun OpenSSO Enterprise are seriously in danger of being "sunsetted" (yeah, I cringed too when I typed it). Interestingly enough, many people are of the opinion that Sun's Identity Manager is a superior product to Tivoli Identity Manager. Conversely, the reverse is true when comparing Federation/Access Management products. Opinions such as these are of course subjective and depend on the requirements at hand and people's personal preferences. The truth is that they are all pretty solid, mature products in their own right so there's no easy answer in making a decision to pick Sun's version over IBM's or vice versa. I see 3 logical possibilities here:
  1. IBM "sunsets" the relevant overlapping Sun IAM products, which will mean that they'll continue to support existing customers but gradually migrate them over to the Tivoli versions.
  2. IBM markets the Sun IAM products as open source alternatives to their enterprise incarnations.
  3. IBM re-hashes the rather unsuccessful "Express" line of products.
Option 1 will be the least popular alternative in the eyes of customers. But it means BIG services opportunities for IBM and IBM's channel of business partners which provide consulting/implementation services. From an IBM perspective, they would be making the sacrifice early on for the greater good of the company and taking the PR and initial professional services (in having to give away free services for the migration to prevent angry mobs from gathering) hit that comes with it (like they had to do when they acquired Encentuate). This is the "rip the band aid off quickly" approach, but it also means lots of job cuts with the sales and marketing teams being first out the door.

Option 2 is the easy way out, but is also the most expensive. Sun already markets their product line as being open. The heavy-lifting part of the marketing's been done and all IBM has to do is see it through while changing the product names. Unfortunately, this is expensive from an ongoing operational and development standpoint. They may choose to absorb the cost as a "good karma tax", so this option could very well fly. The upheaval to existing Sun teams and customers would also be mitigated. This is the "don't rock the boat" option.

Option 3 is the "marketing blue-rinse" option. It's more or less a hybrid of options 1 and 2. IBM will be looking to cut the fat somewhat from a jobs perspective, but not as drastically as they would with option 1. From a technical standpoint, this will be very similar to option 2. The difference is that they bring the products back in-house and promote them as the "light IAM options" for small to medium business. This was exactly the target market for their Express initiative and they may look to re-energise those efforts. Ironically, Tivoli Identity Manager Express was a response to the market perception that Sun Identity Manager is easier to deploy and manage. If this happens, I don't think the Sun products will survive beyond a year or 2. IBM's Express experiment has proven that customers that buy Tivoli still like to choose the heavier version "in case" they need the features and perceived superior stability. Remember, this is not to say the Sun products aren't stable or fully featured. I'm just saying that in this instance, that's what the marketing materials are going to imply and how the sales teams will be selling the products. If not, IBM would look pretty stupid for continuing development on 2 equally good products in parallel that serve the exact same purpose (in the eyes of the customer). If "Express" doesn't sell, this option is simply the less painful, more drawn out, more expensive version of option 1.

No matter which option IBM picks, one thing is certain. They're going to run a fine-tooth comb over the Sun product set, pilfer all the useful bits and roll them into the existing Tivoli product set. This is good for Tivoli customers, but it'll take time for the functionality to start appearing given the speed that IBM moves at.

I don't think competitors like Oracle, CA and Novell will be quaking in their boots though. From an IAM standpoint, any acquisition only increases IBM's market share. It doesn't really give them a big advantage when it comes to product features or functionality. Then again, significantly increased market share is nothing to be sneezed at.

If the rumour proves to be based on solid information and something does happen, the real winners (other than IBM) will be existing IBM customers. The biggest losers? Existing Sun employees and customers, at least from a software perspective.

Friday, March 13, 2009

IBM gets more end-pointy

To be specific, I should say IBM ISS. This time, they're getting in bed with BigFix (the press release is here). Here's the first paragraph of the release:
"Today, IBM announced a first-of-a-kind endpoint security offering, IBM Proventia Endpoint Secure Control (ESC), that is designed to enable enterprises to escape from the constraints of vendor lock-in and to enhance endpoint security, compliance and operations at a lower cost. This new endpoint security offering is delivered by IBM Internet Security Systems (IBM ISS) leveraging IBM's depth in security experience and technology from BigFix, Inc. for endpoint security management."

It sounds like it's some sort of OEM agreement with BigFix to offer up security-focused, endpoint systems management. Essentially, it's to allow for organisations to manage all the bits and bobs of software that end up having to be deployed on endpoints (laptops, desktops etc.) and become a nightmare to manage over time. IBM harps on about "vendor lock-in" and stress that having ESC/BigFix in place makes it much easier to swap out software and replace it with new stuff (McAfee AV with Symantec's, for example). Sounds nice in theory and marketing slides. Not so simple in reality, even with a shiny new toy.

I won't get into the minefield of whether it's a good idea to have some sort of common security policy management or decision point across everything (which is what Symantec and McAfee are trying to do across their bags of toys) - something this offering doesn't address - but I'm sure IBM are working on that. By the way IBM ISS, the boys at Tivoli might have some stuff you could use. You should try talking to them...which brings me to my next point.

I can't help but notice that there's some level of overlap with what IBM Tivoli provides in the way of their systems management software, but this is IBM so it doesn't surprise me that the left hand doesn't seem to be talking to the right. It's business as usual, and somewhere within IBM, a bunch of people in Tivoli are going to be wondering why IBM ISS keeps trying to compete with them. To be fair, the IBM Tivoli stuff isn't as endpoint-focused when it comes to security and isn't as security-focused when it comes to endpoints (this is confusing unless you know the Tivoli products - you IBM Tivoli people know what I'm talking about, don't you). The press release does make a reference to Tivoli:
"The new tool will complement IBM Tivoli's operational desktop management offerings with robust endpoint operational security solutions, allowing customers the ability to address end point security. IBM Proventia ESC will also provide key endpoint security audit data to IBM Tivoli Security Information and Event Manager (TSIEM), further strengthening TSIEM's enterprise-wide compliance reporting capabilities."
But that statement sounds to me like it was thrown in to "keep Tivoli happy". TSIEM could get its endpoint security audit data from any other competitive endpoint source. It doesn't need ESC specifically! Of course, the marketing department will throw in comments about it being better integrated and having "out of the box connectors", but we know how true these things tend to be. Unless development is managed by the same brand, this is extremely difficult to achieve in an adequate amount of time. My money's on the implementation partner being the one that has to pick up the pieces if/when the integration is required at a client's site.

Strategically however, this move makes sense. If your memory goes back to late 2007 (yeah, I know that's quite some time ago), you may remember IBM ISS dipping its toe into data security by offering managed services using a combination of Verdasys, Fidelis and PGP software. I'm not sure they got much traction out of that initiative, but this is a continuation of an increasing focus on the endpoint by IBM ISS, and they want to manage it all too:
"'The killer application in endpoint security is management,' said Dan Powers, vice president of business development at IBM Internet Security Systems."
I don't really agree that management is "the killer app" in the endpoint game, but it's certainly a key piece. The likes of Sophos, Symantec, McAfee, Checkpoint have all been progressively coming out with their own versions of "one agent to rule them all" and wrapping a management layer around it all. I suppose IBM ISS didn't want to get left behind because when it comes to data security, if you ignore the endpoint you've lost the game.

Monday, March 02, 2009

Did IBM actually listen to me?

Or was it a coincidence? I'm not sure because I never did hear back from anyone within IBM in response to my open letter.

The letter I speak of was a rant where I openly asked IBM why they thought it was appropriate to list member email addresses on their newly created communities site by default and not allow for an opt-out. What they really should have done was to set all details as private by default and allow people to opt-in with regards to their details being made public. The fact there was not even an opt-out in relation to email addresses being displayed was unacceptable in my opinion.

I've been away for the past week snowboarding in the French Alps (I just had to throw that bit of detail in - curse me if you must) so I've been a little bit out of it. In trying to "plug" myself back into society, I decided to have a look at the IBM communities site for a laugh. I even contemplated posting my rant to the forum due to their lack of any response. But to my surprise, I noticed something different: email addresses are no longer displayed!

I don't see any changes to the privacy controls - the interface is exactly the same. But some educated individual either decided that public email addresses were a bad idea or read my rant and did something about it. Makes you feel all warm and fuzzy, doesn't it.

In other news, I'm still getting a shed-load of spam to the email address that IBM made public. Thanks IBM.

Thursday, February 19, 2009

Don't give them a reason to fire you

Someone I know is currently being "done" for potential data theft. I should note the potentially biased and subjective view on my part given I know this person but I'll try to maintain some level of objectivity.

This person is currently suspended and under investigation following an incident. What did this person do wrong? Their mistake was simply to be ignorant rather than intentionally do anything malicious.

Those of us in information security know better than to copy and print a bunch of stuff (unless there's a solid business justification to do it) because any organisation with an adequate security team will start wondering what the heck you're doing. Unfortunately, most people are not in the information security profession so it's not as "common sense" as we may all think it is.

This person decided to talk to me because I'm the "security guy". They asked me what they should do. My answer was to just tell the truth because they weren't actually trying to do anything malicious. This person didn't even know how to write files to a DVD a few months ago let alone know what computing-related activities could be deemed as inappropriate.

Consider these points:
  1. Most people in the world are not security geeks. Heck, most aren't even technology literate to the standard most of us assume is in place.
  2. If employees are simply told what they should not be doing and that they can potentially be monitored, they will be more wary about being perceived as doing the wrong thing. In other words, they will be more vigilant about their behaviour because they've been educated. It also means that companies will have more resources available to catch the people actually trying to do the wrong thing.

I won't divulge too much about the incident for obvious reasons (so any specifics regarding the actual incident stops right now) but I do have an opinion on what the root cause of this incident was:

Lack of security awareness training on the part of the organisation in question.

This organisation that this person works for is a large multi-national. I won't mention what industry but it's one where there's copious amounts of sensitive data lying around. So it doesn't surprise me that they have some level of monitoring in place. It is the responsibility of each and every organisation to make sure employees are properly trained in a basic level of information security awareness. It's not good enough that it says what they should not be doing somewhere in the fine print of their employment contract because no one ever reads those things.

In other words, when an employee is pulled up for a potential security incident, it is the organisation's fault if it did not ensure that all employees were made aware of appropriate behaviour when dealing with company data and information. For those wondering: no, this person's soon-to-be ex-employer does not do information security awareness training. I'm guessing it's because employees cannot make the company money while being trained in non-core business activities.

There's actually an additional dimension to this whole episode. Like many organisations in the world today, they are undergoing a round of redundancies. This person was probably already on that list (this is what they tell me), but now it's even worse. They are suspended pending the investigation. And there's no doubt that if this person is cleared, it's "bye bye" anyway.

The problem is that as with any redundancy, there is a severance package to make things slightly easier. But guess what someone dismissed for misconduct gets? That's right, absolutely nothing. So now the situation is worse. Not only is this person being made redundant, they will probably not get their redundancy package because of this so called "misconduct". Way to go big corporation. Turn up the investigations and justify not paying people by saying they attempted to steal a bunch of sensitive information.

When people are scared about being put on the chopping block, what's their first instinct? That's right, they go through their systems and back up all their personal things. We can argue that there should not be any personal things on company assets, but it gets very difficult especially if you have worked for a company for some time.

Those with more sense don't touch any company material. But there are those who aren't actually trying to steal anything and simply think certain documents are useful to have in their "kit-bag". You know what I'm talking about: "cheat sheets", document templates and the like. This is all too common an occurrence and is actually part of the reason there's a so-called "data leakage" industry. It's commonly termed "inadvertent leakage of information". The assumption that it's not harmful to copy certain things if there's no malicious intent is incorrect on the employee's part, but they don't know any better. Once again, education and awareness.

The more common problem lies with the information where there's a fuzzy line between whether something is work-related or not. Generic education materials are a good example. The company could argue that it's their property even if they did not create it, but individuals typically want all the education they can get their hands on. After all, they're going to potentially need it to find their next job. If you're not sure, just don't take it.

While companies are busy investigating innocent (but rather ignorant) individuals for potential data theft, people with genuinely malicious intent could get off unscathed because there may not be adequate resources to investigate them, or even to notice they've done anything. In fact, someone with malicious intent has probably done some level of planning to make themselves harder to catch, so companies have little chance of discovering anything happened without adequate resources. And if those resources are off investigating anyone who printed anything or copied some files to a USB drive, they're going to run out rather quickly.

Doesn't really matter though Mr. CEO, does it? Think of all the money you're saving by not having to pay out these pesky redundancy packages!

As for this person, the investigation is pending. They have told the truth, handed everything over and now just have to wait. Don't make the same mistake out of ignorance.