Every now and then we hear that a cipher algorithm has fallen to a new cracking technique. This cascades into a new round of deprecating any ciphersuites that rely on the newly cracked algorithms. Over the years we’ve moved from SSL to TLS, from DES to 3DES, from MD5 to SHA, and so on. The list of items on MQ’s Deprecated Ciphers page grows slowly but relentlessly over time.
But not all of the ciphersuites that IBM has deprecated are actually broken. So why are they on the list, and should we still use them? This post will attempt to shed some light on those questions.
A growing number of my clients are deploying IBM MQ on Amazon EC2 instances, and a common need I see emerging is for instrumentation and tooling. When MQ instances are ephemeral, deployed on demand and decommissioned just as suddenly, many of the things the MQ admin used to do by hand need to be automated. This includes build-time tasks such as defining objects, run-time tasks such as enabling or disabling queues in the cluster, and forensic capabilities such as archiving error logs.
It is this last item that concerned a recent customer. Their main requirement was to ingest MQ error logs in real time, or at least close to it, so the logs would survive the death of the virtual host on which they were generated. Getting Splunk to ingest the logs was ridiculously easy: just define the log files as a Splunk data input and they immediately become available through the Splunk search interface.
That’s all well and good if all you want to do is browse the logs or search them for particular error codes. Getting the benefit of Splunk analytics, however, requires the error logs to be parsed into fields. Then, instead of merely searching for error codes you already know about, you can ask Splunk to report all the error codes sorted by prevalence, by frequency over time, or even to pick out the rare outliers. All the analytic capabilities become usable once the fields are parsed. Better yet, parse the logs from many queue managers and you can spot trends or pick out nodes showing early signs of distress. That’s really useful stuff, and Splunk provides it right out of the box, but only for log types it knows how to parse. So let’s teach it to parse IBM MQ error logs, shall we?
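As a sketch of what that field extraction involves (in Splunk itself it would take the form of regex-based extractions in props.conf and transforms.conf), here is a prototype parser in Python. It assumes the classic plain-text AMQERR0x.LOG entry layout; the sample entry and the exact field list are illustrative and may need adjustment for your MQ version.

```python
import re

# Regex for one entry of the classic plain-text MQ error log
# (AMQERR01.LOG and friends). The header layout varies slightly
# by version, so the QMgr group is optional.
ENTRY_RE = re.compile(
    r"(?P<timestamp>\d{2}/\d{2}/\d{4} \d{2}:\d{2}:\d{2} (?:AM|PM)?)"
    r".*?Process\((?P<process>[^)]*)\)"
    r".*?User\((?P<user>[^)]*)\)"
    r".*?Program\((?P<program>[^)]*)\)"
    r"(?:.*?QMgr\((?P<qmgr>[^)]*)\))?"
    r".*?(?P<code>AMQ\d{4}[EWIS]?): (?P<message>[^\n]*)",
    re.DOTALL,
)

def parse_entries(log_text):
    """Split the log on its dashed separator lines and parse each entry."""
    entries = []
    for chunk in re.split(r"-{5,}", log_text):
        m = ENTRY_RE.search(chunk)
        if m:
            entries.append(m.groupdict())
    return entries

# Illustrative sample entry; real logs contain many of these in sequence.
sample = """\
03/14/2018 09:26:13 PM - Process(12345.1) User(mqm) Program(amqrmppa)
                    Host(mqhost01) Installation(Installation1)
                    VRMF(8.0.0.5) QMgr(QM1)

AMQ9999: Channel 'APP.SVRCONN' to host 'client1' ended abnormally.

EXPLANATION:
The channel program ended abnormally.
ACTION:
Look at previous error messages in the error logs.
----------------------------------------------------------------
"""

fields = parse_entries(sample)[0]
print(fields["code"], fields["qmgr"], fields["user"])
```

Once the equivalent regex is in place in Splunk, the error code, queue manager, and user become first-class fields you can chart and aggregate rather than strings you grep for.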
After I tweeted a link to an IBM blog post on how to start and stop IBM MQ using systemd, an IBMer responded to say “it surprised me to hear that some #IBMMQ customers have to manually restart their QMs when the box comes up.”
My reply was brutally frank: “Should be no surprise – the serviceability gap with MQ start/stop API resembles an open-pit mine. As a result most shops either don’t do it well, or don’t do it at all. Mortgage payments on the technical debt needed here desperately.”
I didn’t want to leave that hanging out there with no explanation, so this post describes in excruciating detail what’s wrong. Hopefully that’s the first step toward getting it fixed.
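To make the gap concrete, here is a minimal sketch of the kind of systemd template unit the IBM post describes. Everything in it (the unit name, paths, and options) is an illustrative assumption rather than IBM’s supported configuration, and the comments mark where the start/stop serviceability gap bites.

```ini
# /etc/systemd/system/mq@.service -- hypothetical sketch, not a
# supported IBM unit file. One instance per queue manager, e.g.
# systemctl enable mq@QM1.service
[Unit]
Description=IBM MQ queue manager %i
After=network-online.target

[Service]
Type=forking
User=mqm
ExecStart=/opt/mqm/bin/strmqm %i
ExecStop=/opt/mqm/bin/endmqm -w %i
# strmqm returns after spawning the execution controller, so systemd
# has no reliable main PID or health signal to supervise. That lack of
# a clean start/stop status API is the serviceability gap in question.

[Install]
WantedBy=multi-user.target
```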
If you’ve read the abstracts, the theme of my sessions at this year’s MQTC (and hopefully also at IBM Think, if they are accepted) is cloud and virtualization. If you come to a session, though, you’ll find it’s really about designing architecture around configuration management and tooling, with the specific intent of driving administrative overhead and defects down to near zero. So it was a bit distressing yesterday when a string of errors cascaded across the screen during the demo. Unless you are into schadenfreude, in which case watching my live demo auger into the ground might have been fun for you. But in the end the event proves my point rather than invalidating it. Here’s why.
My two sessions from this year’s MQTC are posted:
MQ Automation: Config Management Using Baselines, Patterns and Apps
Take the grunt work out of MQ configuration management for virtualization, cloud, and large networks by applying a layered approach. This session will introduce the concept of building an MQ configuration from a baseline, then defining a class of service with a pattern layer, and finishing off with application configurations. This modular approach dramatically improves consistency, quality, and flexibility while greatly reducing cost. In compliance environments it provides a definitive as-specified configuration to which the as-running state can be reconciled at intervals or in near-real time. A basic script framework for implementing this system will be reviewed as well.
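The layered approach the abstract describes can be sketched as successive overrides: start from the baseline, merge in the pattern for the node’s class of service, then merge in per-application settings. The sketch below is mine, not taken from the session materials, and every object name and attribute in it is invented for illustration.

```python
def merge(*layers):
    """Merge configuration layers; later layers override earlier ones,
    and nested dictionaries merge recursively rather than replacing."""
    result = {}
    for layer in layers:
        for key, value in layer.items():
            if isinstance(value, dict) and isinstance(result.get(key), dict):
                result[key] = merge(result[key], value)
            else:
                result[key] = value
    return result

# Illustrative layers -- names and attributes are invented.
baseline = {"QMGR": {"DEADQ": "SYSTEM.DEAD.LETTER.QUEUE",
                     "CHLAUTH": "ENABLED"}}
pattern_gateway = {"QMGR": {"REPOS": "PROD.CLUSTER"},
                   "LISTENER": {"PORT": 1414}}
app_payments = {"QLOCAL": {"PAYMENTS.REQUEST": {"MAXDEPTH": 50000}}}

config = merge(baseline, pattern_gateway, app_payments)
print(config["QMGR"])
```

The resulting merged dictionary is the definitive as-specified configuration; rendering it to MQSC and diffing it against the as-running state is what makes the reconciliation step possible.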
MQ Automation: Config Management Using Amazon S3
The central server needed to set up an MQ configuration management system turns out to be a consistent showstopper, but with a few pennies and a few scripts you can use Amazon Simple Storage Service (S3) instead. This session introduces scripts that automate QMgr builds with a local shell script that queries a flat-file configuration database stored in the cloud. It’s dirt cheap and super simple, yet can reduce the time and cost of building MQ nodes while improving quality and consistency.
Note: I created a dedicated user for the conference and am supplying the ID and key in the session materials. Download the slides so you can cut and paste the commands to install the AWS metadata files.
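As a hedged sketch of how such a lookup might be organized, the function below maps a queue manager name to an S3 object key and composes the `aws s3 cp` command a build script would run on the new instance. The bucket name, key layout, and file names are all hypothetical, not the ones from the session materials.

```python
import shlex

BUCKET = "mq-config-db"  # hypothetical bucket name

def s3_key(qmgr, artifact):
    """Map a queue manager and artifact type (baseline, pattern, app)
    to an object key in the flat-file configuration database."""
    return f"qmgrs/{qmgr}/{artifact}.mqsc"

def fetch_command(qmgr, artifact, dest="/var/mqm/config"):
    """Return the aws CLI command a build-time script would execute."""
    key = s3_key(qmgr, artifact)
    return f"aws s3 cp s3://{BUCKET}/{key} {shlex.quote(dest)}/{artifact}.mqsc"

print(fetch_command("QM1", "baseline"))
```

In a real build script the returned command would be executed (and the downloaded MQSC files fed to runmqsc); composing the command as a string here just keeps the key-layout logic visible and testable.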
In case you hadn’t noticed yet, IBM has quietly changed the format of the stash file so that the various unstash programs no longer work. In this post I’ll discuss some of the security implications of that change and, since I never quite grew up, also channel Sean Penn’s Spicoli from Fast Times at Ridgemont High and make a lot of stash jokes. As Spicoli might say, “Dude, IBM broke my stash!”
I’ve added a “Versions” tab to the results matrix, corrected some copy/paste errors, and uploaded new copies of the PDF and Excel versions. Over time, as new results are added or corrections made, I’ll replace the existing documents so the links do not change. These are active documents, so expect frequent changes.
I won’t post updates about the GitHub documents since those will probably be the most active artifacts of the entire project – and because GitHub shows you the complete history. Many thanks to fjbsaper and Josh McIver for updates and edits on the tools.
As of v8.0, MQ can natively validate user IDs by checking the password against the operating system or LDAP, and support for checking against a Pluggable Authentication Module (PAM) was added in a later fix pack. Prior to v8.0 it was necessary to use a channel security exit to perform password-based authentication over SVRCONN channels; from v8.0 onward, password validation is natively supported and integrated with CHLAUTH rules.
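For concreteness, here is a hedged MQSC sketch of the combination in question: an IDPWOS AUTHINFO wired to the queue manager’s CONNAUTH, plus a CHLAUTH rule that maps an authenticated client user. The object, channel, and user names are invented, and exact attribute behavior varies by version and fix pack, which is precisely what the analysis examines.

```
* Hypothetical example -- names are illustrative only.
* Require passwords from client applications...
DEFINE AUTHINFO(USE.OS) AUTHTYPE(IDPWOS) CHCKCLNT(REQUIRED)
ALTER QMGR CONNAUTH('USE.OS')
REFRESH SECURITY TYPE(CONNAUTH)
* ...and map the authenticated ID to a low-privilege MCAUSER.
SET CHLAUTH('APP.SVRCONN') TYPE(USERMAP) CLNTUSER('appuser') +
    USERSRC(MAP) MCAUSER('mqapp') CHCKCLNT(REQUIRED)
```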
This has been a widely anticipated feature, so it came as no surprise that implementing it was among the requirements on each of my several most recent consulting engagements. What was surprising, however, is that over time I noticed the techniques I’d used at one client for combining CHLAUTH with password-based authentication didn’t seem to work at the next. The first time I noticed this I wrote it off as having taken poor notes. The second time led me to undertake a comprehensive analysis on a per-version and per-fix-pack basis.
This post and accompanying materials are an executive overview of the findings and recommendations. More detailed findings will be posted shortly. My priority in this initial publication is to introduce the issues and outline the recommendations for safely using the new features.
Keen-eyed observers will have noticed that the MQ and IIB Knowledge Centers now have a floating “Contact Us” overlay at the bottom right of the page. There’s a bit of history there, but long story short: for a while there was no channel from the KCs to the tech writer team.
Morag and I have been lobbying behind the scenes to get KC error reporting reinstated. Not long ago there was a web form that pre-filled the URL of the page being reported. Then there was per-page commenting online, then nothing at all. For a brief time Morag and I had confidential internal email addresses with which to report errors and request updates but were advised not to publish them.
So I’m happy that we now have an official means to do so. Clicking the new overlay panel opens an email addressed to ibmkc at ibm dot com, but the URL of the page is no longer captured for you. When I tried reporting something to that address I received back a human-written reply saying my request had been forwarded to the appropriate team, copying the internal email addresses I’m not supposed to give out. I assume the same will happen should you submit a report.
Since we are now using a common reporting point for KC updates, I’d recommend a few things to help the routing along:
- Put the product name in the subject line.
- Put the URL you are reporting near the top of the email. Unlike previous versions of the IC and KC that were in HTML frames, the URL in the browser address bar is now kept current as you move from page to page. Copy that and paste it into the email.
- The slug (i.e. the “bq28120_” bit before .htm in the URL) is a lot less reliable across product versions due to significant restructuring of the content. It used to be great as a search key, but is not anymore unless you also know which KC version it lives in. Please don’t send bare slugs.
- I’m told that it is better to put small, atomic updates in each email rather than combining them. That makes it easier for a tech writer to tackle them than if he or she has to deal with a list of 20 things in a single email.
My lobbying pitch was that the ability of the community to drive improvements back into the docs is essential. There’s no “Missing Manual” book for MQ in part because we’ve all helped fix the manual. I’m very happy that we can do so once again.