One fine day, as I sat dutifully watching messages on MS Teams channels, a notification popped in. There was an ominous-looking footnote, something about Microsoft deprecating something soon. So I followed the trail and found out that:
- Microsoft had decided to withdraw support for the Office 365 connectors that we were using to send messages to our MS Teams channels.
- The thing was going to happen soon.
I did some preliminary digging. We needed to replace the existing Office 365 connectors with something else. The officially recommended solution was to use Power Automate workflows. However, before jumping into the official docs, I decided to conduct a high-level survey of the landscape and see what the DevOps community was doing about it.
The overwhelming sentiment of the people at the receiving end was negative, to say the least. Criticism of Microsoft went beyond the usual non-specifics to detailed expositions of how the time was too short and the instructions were too vague or non-existent. One frustrated user, Andrew Lightwing, had this to say:
A three month deprecation window over the summer holiday period in which we have to implement a replacement solution whose documentation either amounts to “Now draw the rest of the owl” or outright 404s? This is not reasonable.
Okay, so the battle lines were already drawn.
Despite the colorful commentary on the internet about the issue, I could actually create the new workflows fairly easily and they worked reasonably well. Since our team does not have many channels to deal with in the first place, I was able to successfully suppress my natural instinct to automate everything and use the GUI in MS Teams to get the task done.
But then, for reasons not immediately apparent, the messages from our lovingly arranged Alertmanager failed to arrive. On the surface, everything looked fine. The Kubernetes clusters were healthy, Prometheus was busy, the Alertmanager pods were up and running, and the ConfigMaps showed the updated webhook URLs.
Attempts at debugging the Alertmanager were unsuccessful. For some reason, we were using the services of a third-party tool to collect messages from Alertmanager and pass them on to MS Teams. This tool had, as I found out, already proclaimed to the world at large that it would hereafter resist any attempts at being maintained. So we were taking that Docker image from times long gone, saying a little prayer, and then mixing it into our Kubernetical soup, hoping that it would not topple everything.
As any self-respecting coder will tell you, ancient Docker images with mysterious insides are not best-practice, unless of course you are a shaman yourself. The Ancient One held all the power and did what it wanted to do and was impossible to debug. How does one debug a mysterious deity who won’t throw even a crumb in the way of system or application logs, anyway?
Since I could not debug the Alertmanager per se, I decided to approach the problem a little more dramatically. In a bid to determine what was happening, I created chaos. I prodded, I poked, I deliberately sabotaged and crashed containers all over the place, and yet, no alerts materialized. A quick look over in the Power Automate workflow runs showed that no messages were reaching that service. The blame lay, squarely, with the Alertmanager, even though it was impossible to pinpoint exactly what was going wrong.
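In hindsight, there is a gentler way to conjure up a test alert than crashing containers: POST a synthetic alert straight to Alertmanager's standard v2 API at /api/v2/alerts (with curl, say, after a quick kubectl port-forward). The payload below is only an illustrative sketch; the alert name and labels are invented for the example and are not from our actual setup.
[
  {
    "labels": {
      "alertname": "SyntheticTestAlert",
      "severity": "critical",
      "namespace": "sandbox"
    },
    "annotations": {
      "summary": "Synthetic alert to exercise the MS Teams pipeline"
    }
  }
]
Alertmanager treats such an alert like any other alert coming from Prometheus, so it flows through the same routes and receivers.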
All this meant re-implementing the Alertmanager, which kind of led me astray from the primary issue. However, after the dust settled, we had a brand new implementation of the thing and the Ancient One was banished forever. In this new config, the receivers were set up like this:
receivers:
  - name: 'msteams-workflow'
    msteams_configs:
      - webhook_url: 'to be provided using values.<environment>.yaml'
        send_resolved: true
        title: 'to be provided using values.<environment>.yaml'
This is only a part of the alertmanager.yaml file that the setup uses, but the rest of the YAML is pretty much standard and out of scope for this particular blog. The only thing I really want to highlight is the msteams_configs bit, because we wanted alerts to reach our MS Teams channels.
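For the curious, the receiver still has to be referenced from the routing tree somewhere. A minimal, purely illustrative route block (not our exact configuration, just the standard shape it takes) would look something like this:
route:
  receiver: 'msteams-workflow'
  group_by: ['alertname', 'namespace']
  group_wait: 30s
  group_interval: 5m
  repeat_interval: 4h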
After the reimplementation, testing in a dev cluster still failed to generate any alerts in the MS Teams channels. However, now I had a better handle on what was going on in our Kubernetes clusters, alerting-wise. Tracing the path indicated that alerts were indeed being sent out and the new Power Automate workflows were indeed receiving them.
To put it bluntly, the Alertmanager was sorted out, but the basic problem remained unsolved. It had only moved to a new address.
Power Automate was where the train was derailing now. The workflow politely declined to send the message along any further, resolute in its refusal much like the bouncer of a swanky nightclub, muscular arms crossed and lips tightly pursed. The debugging information provided was about as eloquent as the aforesaid bouncer would yield in similar circumstances.
I had no choice but to offer a few thumbs-downs as a token of my criticism of the whole situation, and get back to the wilds of the internet in search of solutions. There was more digging and hanging out in the back lanes of GitHub Issues, an activity I like to call GitHubbing. Eventually, I came across a kindly person with a heart of gold who had offered what he called a crude workaround.
It was exactly what the doctor ordered. The workaround not only turned out to be effective (thanks, The-M1k3y!), but it also put the spotlight right on the source of the problem.
Alertmanager packs and sends out the alerts, but these are not nicely gift-wrapped. At least, not nicely enough for the Power Automate workflows to pass them on as Adaptive Cards. The workaround rightly identified this as the thorn in everybody’s collective side and showed that an extra step needed to be added to the workflow, between receiving the input and passing it on to MS Teams.
This gift-wrapping step is the Compose action in the Data Operation menu. It intercepts the message sent by the Alertmanager and repacks it, so that the output is in a form that can be used as an Adaptive Card.

The input for this Compose action is the incoming message. The output is a valid Adaptive Card structure. Elements of the incoming message can be parsed and placed in appropriate places. As a very basic example, I refer you to the aforementioned workaround:
[
  {
    "contentType": "application/vnd.microsoft.card.adaptive",
    "content": {
      "body": [
        {
          "type": "TextBlock",
          "weight": "Bolder",
          "size": "ExtraLarge",
          "text": "@{triggerBody()?['title']}"
        },
        {
          "type": "TextBlock",
          "text": "@{triggerBody()?['text']}"
        }
      ],
      "msteams": {
        "width": "Full"
      }
    }
  }
]
For some reason, the latest version of the Adaptive Card schema was not playing well here, so I settled for version 1.2.
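Pinning the version amounts to declaring it inside the content object of the card. A sketch of the relevant fragment, with the body elided, would look something like this (the $schema line is optional, but editors and validators are happier with it around):
"content": {
  "type": "AdaptiveCard",
  "$schema": "http://adaptivecards.io/schemas/adaptive-card.json",
  "version": "1.2",
  "body": [ ... ]
}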
Incorporating the extra Compose step did the trick and I was able to defeat the swanky nightclub bouncer, as it were. The alerting messages began to be forwarded to the appropriate channels.
All that now remained was to clean up the presentation of the alert messages a bit. One of the things I wanted was to have red lettering for the title when alerts were firing and green text when the alerts resolved.
This is what my Compose step looks like at the moment. Apologies for using a screenshot — the editable text was not formatting well at all.

Notice that there are three functions in there. Since some of these contain confidential information, I reluctantly omit details for those. But I can share the color coding function that I wrote.
if(contains(toLower(triggerBody()?['text']), 'resolved'), 'good', if(contains(toLower(triggerBody()?['text']), 'firing'), 'attention', 'default'))
Neat, isn’t it?
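To give a feel for where that function sits without giving away the confidential bits: the expression goes into the color attribute of the title TextBlock, so that chunk of the Compose input ends up looking roughly like this.
{
  "type": "TextBlock",
  "weight": "Bolder",
  "size": "ExtraLarge",
  "color": "@{if(contains(toLower(triggerBody()?['text']), 'resolved'), 'good', if(contains(toLower(triggerBody()?['text']), 'firing'), 'attention', 'default'))}",
  "text": "@{triggerBody()?['title']}"
}
In Adaptive Card terms, 'good' renders as green and 'attention' as red, which is exactly the firing-versus-resolved color scheme I was after.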
Here’s what part of the card now looks like.

With this relatively straightforward (in hindsight) and yet completely undocumented (at least as far as official docs go) fix, things are back to normal and I am able to go back to peacefully watching the MS Teams channels. At least for the time being.
Till something else pops.
“You should have used a generative AI model and solved the problem in minutes.”
I hear you, and it is difficult to avoid generative AI today, when Large Language Models are falling over each other to help you out. So, full disclosure, I did confer with several of these LLMs during the process. However, this was a new problem caused by hitherto unseen changes, beyond the training data of these models, so my interaction with the LLMs was more an exploration of options than anything else.
As I glance over the incoming notifications and happily bask in the green of resolving alerts, I reflect on the nature of Generative AI and its role in the near future, and what it means for us. I realize that while it was a great tool to bounce ideas off, the solutions it offered were not ready-to-deploy.
So I guess our jobs are safe. For now.
But the machines are coming…
Hasta la vista
— Dr Saurabh Sawhney
Do read the companion piece to this blog — Sending messages to MS Teams — a python script