Context
Power Automate Desktop flows have no native functionality for providing information about the current flow at runtime. In cloud flows, we have the workflow() expression, which returns various useful properties that can then be used in subsequent actions and with various connectors to fetch the name, definition, and other details of the flow itself, as well as the current flow run and the environment. But there’s no such functionality in Power Automate Desktop.
So, we usually end up hardcoding some of those values, such as flow names or Ids, inside the flow so they can be used for logging, reading external configs, sending reports, etc. It’s not exactly the end of the world, but it’s not great either: the flow code has to be edited every time something is renamed, and if we use templates to create new flows, the hardcoded values cannot really be standardized and add extra manual steps to each new flow.
But there actually is a way to do it dynamically. It’s not a native action in PAD, but with some scripting magic, it is quite simple to implement. This article describes how.
The source
Whenever a PAD flow runs on a machine, it is executed by a process called PAD.RobotV2.exe. This process is responsible for the runtime, and it is the same for every flow we run, regardless of whether it runs unattended, attended, or even in debug mode via the designer.
This means the process needs to ‘know’ which flow to run. So, it is in fact initiated with a set of command line parameters that define the environment, the flow, and some other details.
Here’s an example of what the outputs of that process might look like when fetched via a simple PowerShell command (when running a sample flow via the PAD designer):
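A minimal sketch of such a command, in case the screenshot is unavailable (it assumes a flow is currently running on the machine; Win32_Process is used because, unlike Get-Process, it exposes the full CommandLine property):

```powershell
# List every running PAD.RobotV2.exe instance with its full command line.
Get-CimInstance Win32_Process |
    Where-Object { $_.Name -eq 'PAD.RobotV2.exe' } |
    Select-Object ProcessId, CreationDate, CommandLine |
    Format-List
```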

The command line arguments there provide plenty of details about the current session, the flow, the environment and the tenant. And they’re standardized.
Note: In previous versions of PAD, the process was actually called PAD.Robot.exe. I believe the V2 was released with the revamped version of PAD earlier this year that runs on .NET 8 runtime. However, the outputs of the process are exactly the same (at least those discussed in this article), and as such, the descriptions that follow will apply even to older versions of PAD, with the exception of having to target a process using a different name.
The script
When a flow is running, we can read the contents of the process via a script. There is no native action for this yet, but pretty much any of the scripting actions can be used. In this article, we share a PowerShell method, but the same approach works with other scripting languages if you prefer something else.
For the purpose of fetching the flow details, the most important argument is the flow Id. It’s an Id rather than a name (similarly to cloud flows, where the workflow() expression also only gives us an Id), but we can use it to fetch information about the flow with the Dataverse connector. This obviously requires Premium; without it, this approach wouldn’t help much anyway, as non-premium desktop flows are not really stored anywhere we can query, and there’s not much else we can do. But the flow Id could perhaps still be used for logging.
The script itself to fetch all this is as follows:

It fetches the processes, orders them by creation date, so that the first flow that executed is the first in the list (in case there are child flows running, too), fetches the first process, retrieves its details, parses the command line arguments and then uses regular expressions to fetch the specific arguments we want. In this example, it’s the flow Id and the run mode. We really only need the flow Id here, but run mode could also be used for logging. Other arguments are also easy to fetch if needed.
It also does some formatting to the flow Id, because it is actually passed in as a GUID with the dashes removed for some reason. But if we wanted to reference it in Dataverse, we would need to format it as a proper GUID, so we also do that in the script.
It then combines the parameters into an object, converts it to a JSON string and returns it as outputs. This can then be used in PAD after converting the JSON string to a custom object.
Here’s a plain text version of the script for easy copy-pasting:
# This script retrieves the first (oldest) instance of PAD.RobotV2.exe and extracts command line arguments

# Get the PAD.RobotV2.exe processes ordered by CreationDate
$processes = Get-CimInstance Win32_Process | Where-Object { $_.Name -eq 'PAD.RobotV2.exe' } | Sort-Object -Property CreationDate

# If no process is found, return an empty JSON object
if ($null -eq $processes) {
    Write-Output '{}'
    return
}

# Get the first process (oldest)
$process = $processes | Select-Object -First 1

# Get CommandLine arguments
$cmdLine = $process.CommandLine

# Get the flowId argument
if ($cmdLine -match '(--flowId\s+(\r\n\s+)?)([A-Za-z0-9]{32})') {
    # Format flowId as a proper GUID
    $flowId = '{0}-{1}-{2}-{3}-{4}' -f `
        $matches[3].Substring(0,8), `
        $matches[3].Substring(8,4), `
        $matches[3].Substring(12,4), `
        $matches[3].Substring(16,4), `
        $matches[3].Substring(20,12)
} else {
    $flowId = $null
}

# Get the run mode argument
if ($cmdLine -match '(--mode\s(\r\n)?)(.+?)(?=\s((--)|\n))') {
    $mode = $matches[3].Trim()
} else {
    $mode = $null
}

# Create an output object
$result = [PSCustomObject]@{
    flowId = $flowId
    mode   = $mode
}

# Return the output as JSON
$json = $result | ConvertTo-Json -Compress
Write-Output $json
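To sanity-check the matching logic without actually running a flow, you can feed it a synthetic command line. The sketch below uses a made-up flow Id and mode; note that the flags are double-dash arguments:

```powershell
# Synthetic command line mimicking the arguments PAD.RobotV2.exe receives (values made up).
$cmdLine = 'PAD.RobotV2.exe --flowId 0123456789abcdef0123456789abcdef --mode Run --other x'

# Extract the 32-character flow Id and reinsert the GUID dashes.
if ($cmdLine -match '(--flowId\s+(\r\n\s+)?)([A-Za-z0-9]{32})') {
    $flowId = '{0}-{1}-{2}-{3}-{4}' -f `
        $matches[3].Substring(0,8), `
        $matches[3].Substring(8,4), `
        $matches[3].Substring(12,4), `
        $matches[3].Substring(16,4), `
        $matches[3].Substring(20,12)
}
$flowId  # 01234567-89ab-cdef-0123-456789abcdef
```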
Disclaimer: The script will work with any number of flows running at a time. At this point, it is actually possible to have up to 9 flows running at the same time:
- A maximum of 3 as invoked from the cloud as attended flows:
- A parent flow (directly invoked from the cloud)
- A child flow invoked by the parent without the parent waiting for the child to complete
- A child flow invoked by the parent with the parent waiting for the child to complete (this would be the currently running flow)
- A set of the same 3 types of flows as invoked via the PAD console manually
- A set of the same 3 types of flows as invoked via the PAD designer in debug mode manually
If there are any parallel flows running at the same time, the script will return the details of the flow that was invoked first (based on the CreationDate property of the process). This works fine in normal parent-to-child scenarios where, for example, the script is in a child flow but we want to fetch the details of the main (parent) flow. But in the extreme case of testing while other flows are running, it may return unexpected results.
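If you instead want the most recently started flow (for example, the currently running child flow rather than its parent), a small variation of the selection step sorts in descending order. This is a sketch of that alternative, not part of the original script:

```powershell
# Pick the newest PAD.RobotV2.exe process instead of the oldest one,
# e.g. to target the currently running child flow rather than its parent.
$process = Get-CimInstance Win32_Process |
    Where-Object { $_.Name -eq 'PAD.RobotV2.exe' } |
    Sort-Object -Property CreationDate -Descending |
    Select-Object -First 1
```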
Using it at runtime
The rest of the flow is quite straightforward – we run the script, we convert the outputs to an object and then use them in a Get a row by ID from selected environment (Dataverse) action.

Since the flow Id returned this way is the actual GUID of the flow in the Processes table in Dataverse, we can in fact use it to fetch the one entry, instead of listing rows and then parsing the array.

The response will contain a bunch of information about the flow, including its name, and even its full definition. These can then be used for logging, reporting, reading configs, or even running some analysis on the flow definition.
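Outside the connector, the same lookup can also be done directly against the Dataverse Web API. A minimal sketch, assuming desktop flows live in the workflows entity set (the workflow/Process table) and that the environment URL and access token below are hypothetical placeholders you would supply yourself:

```powershell
# Hypothetical values - replace with your environment URL, a valid OAuth token,
# and the GUID produced by the script above.
$envUrl = 'https://yourorg.crm.dynamics.com'
$token  = '<access token>'
$flowId = '01234567-89ab-cdef-0123-456789abcdef'

# Fetch the desktop flow row by its primary key; name and clientdata
# (the flow definition) are the columns most useful for logging/analysis.
$uri  = "$envUrl/api/data/v9.2/workflows($flowId)?`$select=name,clientdata"
$flow = Invoke-RestMethod -Uri $uri -Headers @{ Authorization = "Bearer $token" }
$flow.name
```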
Recommendation
We highly recommend saving this as a standalone flow that is then invoked as a child flow by others whenever the name of the parent flow is needed. To do that, we need to add some output variables to actually return the values fetched by the flow.

This will then be available to the parent after the child flow executes. At the very least that should be the flow name, but could also include more, depending on the use case. Some potential ideas for building on top of this solution are:
- Returning the trigger type for logging / conditional runtime
- Fetching the environment name and/or region for logging
- Fetching the solution information based on the flow Id from Dataverse
- Fetching the flow definition based on the flow Id from Dataverse to push it to source control
There are quite a few possibilities here. Whether they are relevant or applicable depends on your use case.