Operator Manual

How to operate Foyle

This guide covers operating and troubleshooting Foyle.

1 - Monitoring AI Quality

How to monitor the quality of AI outputs

What You’ll Learn

  • How to observe the AI to understand why it generated the answers it did

What was the actual prompt and response?

A good place to start when trying to understand the AI’s responses is to look at the actual prompt and response from the LLM that produced the cell.

You can fetch the request and response as follows

  1. Get the log for a given cell
  2. From the cell get the traceId of the AI generation request
CELLID=01J7KQPBYCT9VM2KFBY48JC7J0
export TRACEID=$(curl -s -X POST http://localhost:8877/api/foyle.logs.LogsService/GetBlockLog -H "Content-Type: application/json" -d "{\"id\": \"${CELLID}\"}" | jq -r .blockLog.genTraceId)
echo TRACEID=$TRACEID
  • Given the traceId, you can fetch the request and response from the logs
curl -s -o /tmp/response.json -X POST http://localhost:8877/api/foyle.logs.LogsService/GetLLMLogs -H "Content-Type: application/json" -d "{\"traceId\": \"${TRACEID}\"}"
CODE="$?"
if [ $CODE -ne 0 ]; then
  echo "Error occurred while fetching LLM logs"
  exit $CODE
fi
  • You can view an HTML rendering of the prompt and response
  • If you disable interactive mode for the cell then vscode will render the HTML response inline
  • Note: there appears to be a bug right now in the HTML rendering that introduces extra newlines relative to the markdown in the actual JSON request
jq -r '.requestHtml' /tmp/response.json > /tmp/request.html
cat /tmp/request.html
  • To view the response
jq -r '.responseHtml' /tmp/response.json > /tmp/response.html
cat /tmp/response.html
  • To view the JSON versions of the actual request and response
jq -r '.requestJson' /tmp/response.json | jq .
jq -r '.responseJson' /tmp/response.json | jq '.messages[0].content[0].text'
  • You can print the raw markdown of the prompt and the full JSON of the response as follows
jq -r '.requestJson' /tmp/response.json | jq -r '.messages[0].content[0].text'
jq -r '.responseJson' /tmp/response.json | jq .
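  • Putting the steps above together, here is a minimal sketch (using the same endpoints and jq paths shown above) that goes from a cell ID straight to the raw prompt markdown
CELLID=01J7KQPBYCT9VM2KFBY48JC7J0
TRACEID=$(curl -s -X POST http://localhost:8877/api/foyle.logs.LogsService/GetBlockLog -H "Content-Type: application/json" -d "{\"id\": \"${CELLID}\"}" | jq -r .blockLog.genTraceId)
curl -s -X POST http://localhost:8877/api/foyle.logs.LogsService/GetLLMLogs -H "Content-Type: application/json" -d "{\"traceId\": \"${TRACEID}\"}" | jq -r '.requestJson' | jq -r '.messages[0].content[0].text'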

2 - Troubleshoot Learning

How to troubleshoot and monitor learning

What You’ll Learn

  • How to ensure learning is working and monitor learning

Check Examples

If Foyle is learning there should be example files in ${HOME}/.foyle/training

ls -la ~/.foyle/training

The output should include example.binpb files as illustrated below.

-rw-r--r--    1 jlewi  staff   9895 Aug 28 07:46 01J6CQ6N02T7J16RFEYCT8KYWP.example.binpb

If there aren’t any then no examples have been learned.
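
A quick sketch for counting the learned examples:

ls ~/.foyle/training/*.example.binpb 2>/dev/null | wc -l

If the count is 0, work through the steps below to figure out why learning isn't happening.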

Trigger Learning

Foyle’s learning is triggered whenever a cell is successfully executed.

Every time you switch the cell in focus, Foyle creates a new session. The current session ID is displayed in the lower right hand context window in vscode.

You can use the session ID to track whether learning occurred.

Did The Session Get Created?

  • Check the session log
CONTEXTID=01J7S3QZMS5F742JFPWZDCTVRG
curl -s -X POST http://localhost:8877/api/foyle.logs.SessionsService/GetSession -H "Content-Type: application/json" -d "{\"contextId\": \"${CONTEXTID}\"}" | jq .
  • If this returns not found then no log was created for this session and there is a problem with Log Processing

  • The output should include

    • LogEvent for cell execution
    • LogEvent for session end
    • FullContext containing a notebook
  • Ensure the cells all have IDs (see the sketch after this list)
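  • To spot-check these items programmatically you can save the session and inspect it with jq. The sketch below assumes the response nests the session under .session with camelCase fields (.logEvents, .fullContext.notebook.cells); adjust the jq paths if your payload differs
# Save the session response (field names here are assumptions; see the note above)
curl -s -X POST http://localhost:8877/api/foyle.logs.SessionsService/GetSession -H "Content-Type: application/json" -d "{\"contextId\": \"${CONTEXTID}\"}" -o /tmp/session.json

# List the types of the LogEvents recorded for the session
jq '.session.logEvents[].type' /tmp/session.json

# List the cell IDs captured in the FullContext notebook; null entries indicate cells without IDs
jq '.session.fullContext.notebook.cells[].metadata.id' /tmp/session.json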

Did we try to create an example from any cells?

  • If Foyle tries to learn from a cell it logs the message "Found new training example"
  • We can query for that log as follows
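  • The queries below read Foyle's structured logs through the ${LASTLOG} variable; a minimal sketch for setting it, assuming Foyle's JSON logs are written under ${HOME}/.foyle/logs (adjust the path for your installation)
# Assumption: Foyle writes JSON logs to ${HOME}/.foyle/logs; adjust for your setup
LASTLOG=$(ls -t ${HOME}/.foyle/logs/*.json | head -n 1)
echo "LASTLOG=${LASTLOG}"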
jq -c 'select(.message == "Found new training example")' ${LASTLOG}
  • If that returns nothing then we know Foyle never tried to learn from any cells
  • If it returns something then we know Foyle tried to learn from a cell but it may have failed
  • If there is an error processing an example, the message "Failed to write example" is logged at the error level
  • So we can search for that error message in the logs
jq -c 'select(.level == "error" and .message == "Failed to write example")' ${LASTLOG}

Ensure Block Logs are being created

  • The query below checks that block logs are being created.
  • If no logs are being processed then there is a problem with the block log processing.
jq -c 'select(.message == "Building block log")' ${LASTLOG}

Check Prometheus counters

Check to make sure blocks are being enqueued for learner processing

curl -s http://localhost:8877/metrics | grep learner_enqueued_total 
  • If the number is 0 please open an issue in GitHub because there is most likely a bug

Check the metrics for post processing of blocks

curl -s http://localhost:8877/metrics | grep learner_sessions_processed
  • The value of learner_sessions_processed{status="learn"} is the number of blocks that contributed to learning
  • The value of learner_sessions_processed{status="unexecuted"} is the number of blocks that were ignored because they were not executed
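  • To turn these two counters into a single number, the sketch below scrapes the /metrics endpoint shown above and computes the fraction of processed sessions that resulted in learning
# Scrape the two counters and compute learn / (learn + unexecuted)
METRICS=$(curl -s http://localhost:8877/metrics | grep '^learner_sessions_processed')
LEARNED=$(echo "$METRICS" | grep 'status="learn"' | awk '{print $2}')
UNEXECUTED=$(echo "$METRICS" | grep 'status="unexecuted"' | awk '{print $2}')
awk -v l="${LEARNED:-0}" -v u="${UNEXECUTED:-0}" 'BEGIN { if (l+u > 0) printf "learn ratio: %.2f\n", l/(l+u); else print "no sessions processed yet" }'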

3 - Cloud Logging

Use Google Cloud Logging for Foyle logs

What You’ll Learn

How to use Google Cloud Logging to monitor Foyle.

Google Cloud Logging

If you are running Foyle locally but want to stream logs to Cloud Logging you can edit the logging stanza in your config as follows:

logging:
  sinks:
    - json: true
      path: gcplogs:///projects/${PROJECT}/logs/foyle     

Remember to substitute in your actual project for ${PROJECT}.

In the Google Cloud Console you can find the logs using the query

logName = "projects/${PROJECT}/logs/foyle"

While Foyle logs to JSON files, Google Cloud Logging is convenient for querying and viewing your logs.
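
If you prefer the terminal, the same filter works with the gcloud CLI; a sketch, assuming the Cloud SDK is installed and you are authenticated against the project:

gcloud logging read "logName=\"projects/${PROJECT}/logs/foyle\"" --project="${PROJECT}" --limit=10 --freshness=1d --format=json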

Logging to Standard Error

If you want to log to standard error you can set the logging stanza in your config as follows:

logging:
  sinks:
    - json: false
      path: stderr     

4 - Honeycomb

How to monitor Foyle with Honeycomb

What You’ll Learn

  • How to use OpenTelemetry and Honeycomb to monitor Foyle

Setup

Configure Foyle To Use Honeycomb

foyle config set telemetry.honeycomb.apiKeyFile=/path/to/apikey

Download Honeycomb CLI

  • hccli is an unofficial CLI for Honeycomb.
  • It is being developed to support using Honeycomb with Foyle.
TAG=$(curl -s https://api.github.com/repos/jlewi/hccli/releases/latest | jq -r '.tag_name')
# Remove the leading v because it's not part of the binary name
TAGNOV=${TAG#v}
OS=$(uname -s | tr '[:upper:]' '[:lower:]')
ARCH=$(uname -m)
echo latest tag is $TAG
echo OS is $OS
echo Arch is $ARCH
LINK=https://github.com/jlewi/hccli/releases/download/${TAG}/hccli_${TAGNOV}_${OS}_${ARCH}
echo Downloading $LINK
wget $LINK -O /tmp/hccli

Move the hccli binary to a directory in your PATH and make it executable.

chmod a+rx /tmp/hccli
sudo mv /tmp/hccli /usr/local/bin/hccli

On Darwin, remove the quarantine attribute so you can execute the binary.

sudo xattr -d com.apple.quarantine /usr/local/bin/hccli

Configure Honeycomb

  • In order for hccli to generate links to the Honeycomb UI it needs to know the base URL of your environment.
  • You can get this by looking at the URL in your browser when you are logged into Honeycomb.
  • It is typically something like
https://ui.honeycomb.io/${TEAM}/environments/${ENVIRONMENT}
  • You can set the base URL in your config
hccli config set baseURL=https://ui.honeycomb.io/${TEAM}/environments/${ENVIRONMENT}/
  • You can check your configuration by running the get command
hccli config get

Measure Acceptance Rate

  • To measure the utility of Foyle we can look at how often Foyle suggestions are accepted
  • When a suggestion is accepted we send a LogEvent of type ACCEPTED
  • This creates an OTEL trace containing a span named LogEvent with the attribute eventType == ACCEPTED
  • We can use the query below to calculate the acceptance rate
QUERY='{
  "calculations": [
    {"op": "COUNT", "alias": "Event_Count"}
  ],
  "filters": [
    {"column": "name", "op": "=", "value": "LogEvent"},
    {"column": "eventType", "op": "=", "value": "ACCEPTED"}
  ],
  "time_range": 86400,
  "order_by": [{"op": "COUNT", "order": "descending"}]
}'

hccli querytourl --dataset foyle --query "$QUERY" --open
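  • To compute an actual rate, here is a sketch of a variant query that drops the eventType filter and instead breaks down LogEvent spans by eventType, so the ACCEPTED count can be compared against the total (this assumes the eventType attribute is set on every LogEvent span)
QUERY='{
  "calculations": [
    {"op": "COUNT", "alias": "Event_Count"}
  ],
  "filters": [
    {"column": "name", "op": "=", "value": "LogEvent"}
  ],
  "breakdowns": ["eventType"],
  "time_range": 86400,
  "order_by": [{"op": "COUNT", "order": "descending"}]
}'

hccli querytourl --dataset foyle --query "$QUERY" --open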

Token Count Usage

  • The cost of LLMs depends on the number of input and output tokens
  • You can use the query below to look at token usage
QUERY='{
  "calculations": [
    {"op": "COUNT", "alias": "LLM_Calls_Per_Cell"},
    {"op": "SUM", "column": "llm.input_tokens", "alias": "Input_Tokens_Per_Cell"},
    {"op": "SUM", "column": "llm.output_tokens", "alias": "Output_Tokens_Per_Cell"}
  ],
  "filters": [
    {"column": "name", "op": "=", "value": "Complete"}
  ],
  "breakdowns": ["trace.trace_id"],
  "time_range": 86400,
  "order_by": [{"op": "COUNT", "order": "descending"}]
}'

hccli querytourl --dataset foyle --query "$QUERY" --open