lbendlin

Working with Power BI Gateway logs

Power BI gateways play an important role - they bridge the data gap for the Azure service that drives Power BI online offerings such as browser-based reports and dashboards, or user access to XMLA endpoints (for example through "Analyze in Excel").

 

Without gateways the Azure service would not know how to talk to your on-premises data sources (like a SQL Server instance in your company network). Gateways receive requests from the Azure service, process them, and return the results to the service.

 

If you have a personal gateway, or only a single instance of an Enterprise gateway, then you are living dangerously - your personal gateway may be installed on a PC that is not always on, and your single-instance Enterprise gateway might fall offline for a variety of reasons.

 

If you are concerned about business continuity (and you should be) then you want multiple gateway instances (ideally in different geographical locations) combined into a gateway cluster. That way, if one of the instances falls over, the other instances can continue to serve the data requests without much impact.

 

Setting a cluster up is one thing, measuring its efficiency is something else entirely. How do you know if a gateway is part of your slow refresh problem or not? Can the gateway handle all the requests for refresh during a regular week? If so, can it do so barely, or with plenty of capacity to spare?

 

One way to look at this is to examine the gateway logs. In the default configuration these logs are collected automatically, both for general information and for errors. The info logs tend to be rather verbose, so the gateway only keeps them around for a day or so. Errors are (hopefully) less frequent, and you may see two weeks or more of logs.

 

Logs are stored in the "AppData" folder of the account running the gateway service.

[Screenshot: the gateway log folder under AppData]
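If you just want to see what is in that folder - and how far back the retention actually reaches on your machine - a small Power Query sketch like this one lists the log files with their timestamps. The path assumes the default gateway service account (PBIEgwService, the same account used further down); adjust it if your gateway runs under a different account.

let
    // List the gateway log files and their dates to check how much history is kept
    Source = Folder.Files("C:\Users\PBIEgwService\AppData\Local\Microsoft\On-premises data gateway"),
    LogFiles = Table.SelectRows(Source, each [Extension] = ".log"),
    Result = Table.SelectColumns(LogFiles, {"Name", "Date modified"})
in
    Result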

 

You can also enable advanced logging (you need to restart the gateway service for that) for even more verbose output and even shorter retention periods. You only want to do that for short periods of time, when Microsoft asks you to provide gateway logs as part of the investigation of a ticket you raised. Clicking the "Export Logs" link packs up all the text files into a ZIP archive.

 

[Screenshot: the gateway app settings with the "Export Logs" link]

Giving the log data to Microsoft is not optimal, though. They are investigating the issue with one of your dataset refreshes, but you may have hundreds of other refreshes going on that are unrelated (and potentially contain sensitive information). Collecting the gateway logs manually also gets old very quickly if you have more than just one or two instances (as you should, for business continuity).

 

What if we could examine these log files ourselves? Can't be that hard, right? These are just text files after all.

 

[Screenshot: raw contents of a gateway log file]

This is where it gets ugly, and quickly. Log files read more like a stream-of-consciousness flow of random thoughts scribbled onto a napkin on a wet table in a stiff breeze.

 

Well, maybe not exactly, but as you can see these log files certainly lack one thing - structure. And they carry a lot of dead weight: rows and rows of text that seemingly contain no information.

 

There is no documentation (that I know of) on the structure of the log files. From my observations they can have as few as a single column and as many as eight columns of data, with a lot of these columns filled with seemingly meaningless identifiers for processes and subprocesses.

 

[Screenshot: log file columns filled with process and subprocess identifiers]

There are some identifiable pieces - timestamps, message types, and the occasional clear text of an error.

 

[Screenshot: log entries showing timestamps, message types, and error text]

 

The best I could come up with is a Power Query franken-script to try and tame the data and only let the information through.  I am pretty sure that I am misinterpreting some of the data, and missing other important items.

 

Here is the script ("GetLog") to process an individual text file. Note that I chose to add an index to get some idea of the sequence of events, as there are often many events with the exact same timestamp. Also note that the actual useful information can be in either column 1 or column 8, so I add an "Action" column that pulls the data accordingly.

 

 

 

(Content) => let
    // Gateway logs are tab-delimited with up to eight columns
    Source = Csv.Document(Content,[Delimiter="#(tab)", Columns=8, Encoding=65001, QuoteStyle=QuoteStyle.None]),
    // The index preserves the sequence of events that share the same timestamp
    #"Added Index" = Table.AddIndexColumn(Source, "Index", 0, 1, Int64.Type),
    // The useful text sits in either column 1 or column 8
    #"Added Conditional Column" = Table.AddColumn(#"Added Index", "Action", each if [Column8] = "" then [Column1] else [Column8]),
    // Pull the timestamp out of the rows that start with "DM."
    #"Added Conditional Column1" = Table.AddColumn(#"Added Conditional Column", "Timestamp", each if Text.StartsWith([Column1], "DM.") then Text.Middle([Column1],Text.PositionOf([Column1],":")+6,28) else null),
    #"Removed Columns" = Table.RemoveColumns(#"Added Conditional Column1",{"Column1", "Column8"}),
    // Turn empty strings into nulls so FillDown can propagate the values
    #"Replaced Value" = Table.ReplaceValue(#"Removed Columns","",null,Replacer.ReplaceValue,{"Column2", "Column3", "Column4", "Column5", "Column6", "Column7", "Action", "Timestamp"}),
    #"Filled Down" = Table.FillDown(#"Replaced Value",{"Column2", "Column3", "Column4", "Column5", "Column6", "Column7", "Action", "Timestamp"})
in
    #"Filled Down"

 

 

 

Then you can choose to treat Info logs and Error logs differently (although they have the same structure) and focus on certain action types or content.

 

Here is an example of how to read all the Info files from a gateway cluster member ("GetInfo"):

 

(Machine) => let
    // Read the log folder on the cluster member via its admin share (default service account PBIEgwService)
    Source = Folder.Files("\\" & Machine & "\c$\Users\PBIEgwService\AppData\Local\Microsoft\On-premises data gateway"),
    // Info log file names start with "GatewayInfo"
    #"Filtered Rows" = Table.SelectRows(Source, each Text.StartsWith([Name], "GatewayInfo")),
    // Parse each file with the GetLog function from above
    #"Invoke Custom Function1" = Table.AddColumn(#"Filtered Rows", "Transform File", each GetLog([Content])),
    #"Expanded Transform File" = Table.ExpandTableColumn(#"Invoke Custom Function1", "Transform File", {"Column2", "Column3", "Column4", "Column5", "Column6", "Column7", "Index", "Action", "Timestamp"}, {"Column2", "Column3", "Column4", "Column5", "Column6", "Column7", "Index", "Action", "Timestamp"}),
    #"Removed Other Columns" = Table.SelectColumns(#"Expanded Transform File",{"Column2", "Column3", "Column4", "Column5", "Column6", "Column7", "Index", "Action", "Timestamp"})
in
    #"Removed Other Columns"

 

 

Now all that remains is to provide a list of machines (your gateway cluster members) and to run the log collection across the entire list. 
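The Gateways query referenced below can be as simple as a manually maintained table with one row per cluster member. Here is a minimal sketch - the machine names are made up, and the Active flag lets you take a member out of the collection without deleting its row:

let
    // Hypothetical machine names - replace with your actual cluster members
    Source = #table(
        type table [Gateway = text, Active = logical],
        {
            {"GATEWAY-VM-01", true},
            {"GATEWAY-VM-02", true},
            {"GATEWAY-VM-03", false}
        })
in
    Source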

 

 

let
    // Gateways is a list of cluster member machine names with an Active flag
    Source = Gateways,
    // Only query the members flagged as active
    #"Filtered Rows" = Table.SelectRows(Source, each [Active]=true),
    // Collect and combine the info logs from every active cluster member
    #"Invoked Custom Function" = Table.AddColumn(#"Filtered Rows", "GetInfo", each GetInfo([Gateway])),
    // Column5 and Column6 get renamed to GatewaySessionID and ClientPipelineID
    #"Expanded GetInfo" = Table.ExpandTableColumn(#"Invoked Custom Function", "GetInfo", {"Column2", "Column3", "Column4", "Column5", "Column6", "Column7", "Index", "Action", "Timestamp"}, {"Column2", "Column3", "Column4", "GatewaySessionID", "ClientPipelineID", "Column7", "Index", "Action", "Timestamp"}),
    #"Changed Type" = Table.TransformColumnTypes(#"Expanded GetInfo",{{"Timestamp", type datetimezone}, {"Index", Int64.Type}})
in
    #"Changed Type"

 

 

Now we can run reports about gateway cluster utilization over time, and we can validate that all gateway cluster members do their part in processing the service requests.
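One simple way to do that is a measure that just counts log rows - a rough proxy for utilization. This assumes the combined query above is loaded as a table called GatewayInfo (the same table the calculated column further down refers to); put the Timestamp column on the axis and the Gateway column on the legend.

Log Events = COUNTROWS ( GatewayInfo )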

 

Here is an example of the on-ramp of requests against the four cluster members of one gateway on a Monday. Note that the primary (anchor) cluster member failed to restart on Saturday. The other three cluster members continued on, and on Monday I swapped in a new primary member.

 

[Screenshot: chart of refresh requests per gateway cluster member over time]

Gateway error logs help identify connections where either the credentials have expired or the data source is offline.

 

[Screenshot: report of gateway error log entries]

 

I have found that gateways distribute data source refresh requests over multiple cluster members, even for the same dataset. That's actually pretty impressive. After lots of searching I also finally found out how to connect the gateway logs back to our tenant audit logs - the dataset Id is included in the "objectId" rows of the log files. Adding a calculated column

 

Dataset = RIGHT ( LEFT ( GatewayInfo[Action], 51 ), 36 )
 
then allows me to tie the logs into the data model (the formula extracts the 36-character dataset GUID that starts at position 16 of the Action text).
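If you prefer to do the same extraction in Power Query instead of DAX, an equivalent step (relying on the same position assumption, characters 16 through 51 of the Action text) would be:

let
    // Start from the combined log table built above
    Source = GatewayInfo,
    // 36 characters starting at the 16th position (0-based offset 15), same as the DAX column
    #"Added Dataset" = Table.AddColumn(Source, "Dataset", each Text.Middle([Action], 15, 36))
in
    #"Added Dataset"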
 
 
[Screenshot: report page linking gateway log entries to datasets]

 

Now I can directly identify the developer I need to work with to correct a connection issue or to rein in excessive refresh requests that consume all the gateway resources.

 

None of the above is simple. I wish that in the future Microsoft would provide more information on the structure of the gateway logs, more control over what to log, and ideally some real-time gateway performance monitoring tools that would make the process described here obsolete.

 

 
Comments

@lbendlin  Could you provide the community a sample file? Transformations on errors seem difficult to us.

How do you define active?

How do you define a valuable error preview? 

Thanks a lot!

Jasper Baert

Hi @lbendlin 

Thank you for this awesome post. 

 

Have you figured out what GUIDs in Column2,3,5,6 and 7 mean?

 

Thanks a lot.

Best regards.

Anonymous

@lbendlin How can we get the list of machines (the gateway cluster members) ?

Thank you for this article well detailed