fix(trace): Reload traces immediately on service worker start #37551
Conversation
Test results for "tests 1": 2 failed, 2 flaky, 46922 passed, 821 skipped. Merge workflow run.
let loadTracesPromise = loadTracesFromIndexDB();
I don't think we want all the traces to be loaded for inactive clients. I think we want to move the if (!clientIdToTraceUrls.has(event.clientId)) block of the original code up in doFetch and omit it for /contexts.
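A minimal sketch of that suggestion, assuming the service worker's fetch handling is funneled through a doFetch(event) function and a clientIdToTraceUrls map as in the existing code; restoreTraceUrlsForClient and routeRequest are hypothetical helpers standing in for the real logic, not the actual diff:

```ts
// Sketch only (TypeScript, "webworker" lib types assumed); not the actual implementation.
const clientIdToTraceUrls = new Map<string, string[]>();

// Hypothetical helper: re-populate the map for one client from IndexedDB.
declare function restoreTraceUrlsForClient(clientId: string): Promise<void>;
// Hypothetical helper: the existing per-route handling (/contexts, /ping, /trace/snapshot, ...).
declare function routeRequest(event: FetchEvent): Promise<Response>;

async function doFetch(event: FetchEvent): Promise<Response> {
  const url = new URL(event.request.url);

  // Moved up from the original code: restore this client's traces before any
  // route runs, but skip it for /contexts, which populates the map itself.
  if (!url.pathname.endsWith('/contexts') && !clientIdToTraceUrls.has(event.clientId))
    await restoreTraceUrlsForClient(event.clientId);

  return routeRequest(event);
}
```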
Makes sense. Although that would mean that if the first request after a reload is a /trace/snapshot call, the trace is not reloaded (since this is a different client worker (event.clientId) than the one that called /contexts) and we still return 404. Right?
Also we have to think about scenarios like:
- If one client re-connects, its traces are loaded again. But if it then calls /ping, it will gc and remove traces (params in IndexDB) for all other clients.

Sidenote: How important is the gc? Isn't the browser already taking care of this by shutting down the SW? (We can always do the await self.clients.matchAll(); check whenever the SW restarts.)
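For concreteness, a rough sketch of the self.clients.matchAll() guard being discussed here, assuming gc() prunes a clientIdToTraceUrls map plus a persisted copy; deleteStoredTraceUrls is a hypothetical helper, not the actual implementation:

```ts
// Sketch only; assumes TypeScript with the "webworker" lib types.
declare const self: ServiceWorkerGlobalScope;
declare const clientIdToTraceUrls: Map<string, string[]>;
// Hypothetical helper: drop the persisted (IndexedDB) entry for a client.
declare function deleteStoredTraceUrls(clientId: string): Promise<void>;

async function gc(): Promise<void> {
  // Ask the browser which clients are still alive. After an SW restart this
  // still returns the existing trace viewer tabs, so their traces survive gc.
  const liveClients = await self.clients.matchAll();
  const liveIds = new Set(liveClients.map(c => c.id));

  for (const clientId of [...clientIdToTraceUrls.keys()]) {
    if (!liveIds.has(clientId)) {
      clientIdToTraceUrls.delete(clientId);
      await deleteStoredTraceUrls(clientId);
    }
  }
}
```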
The GC is so that you could open 100 traces one after another in the same window (UI Mode) without exploding out of memory.
> Makes sense. Although that would mean that if the first request after a reload is a /trace/snapshot call, the trace is not reloaded (since this is a different client worker (event.clientId) than the one that called /contexts) and we still return 404. Right?

Why is it a different client worker? I assume the event.clientId values are preserved, so it should return the right value.

> Also we have to think about scenarios like: If one client re-connects, its traces are loaded again. But if it then calls /ping, it will gc and remove traces (params in IndexDB) for all other clients.

Why does gc remove traces for all other clients? I would assume that once the SW is restarted, self.clients.matchAll() would return them and no gc would happen?
> Why is it a different client worker? I assume the event.clientId values are preserved, so it should return the right value.

I should have been more clear here. What I meant is that each trace viewer has two clients: one for the trace viewer itself (which also loads the trace) and one for the iframe (which fetches /snapshot). They have different event.clientIds (but indeed on a restart their IDs remain the same).

But I actually don't think it is currently possible for the /snapshot call to arrive first after a SW restart (its clientId is not linked to a saved trace), since when selecting an action in the trace viewer a /snapshotInfo call is always made first (which reloads the trace). Only then is the /snapshot call made, and by that time the trace has already been reloaded.
Great find!!!
I uploaded #37656 with the fix. If you think there are different use cases we need to address, please feel free to add a test and it'll be clear what is being fixed!
@pavelfeldman That indeed fixes some of the use cases; I am currently trying different scenarios. I have the following scenario which fails (was not sure how to put this into an actual test):
Given you open up 2 tabs with the trace viewer in the same browser
Underlying cause: The
This is a follow-up to PR #37442; it fixes my PR comments mentioned there:

> With the current restart mechanism, the traces would not reload since the /ping calls would come in first and clear (via the gc) the IDB-stored clientIdToTraceUrls (since clientIdToTraceUrls is an empty map at that point).

This PR introduces a new approach: it simply loads all the traces from IDB whenever the SW is restarted. When handling fetches it has to wait for this loading to complete, otherwise /ping or /contexts might alter the IDB/local clientIdToTraceUrls.

Tip for testing: You can unregister and stop (which basically simulates a restart) service workers at chrome://serviceworker-internals/?devtools.

fixes: #37421
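For illustration, a minimal sketch of the mechanism described above, assuming loadTracesFromIndexDB() is the loader from the diff; handleFetch is a hypothetical stand-in for the existing routing, and the wiring shown is an assumption rather than the actual code:

```ts
// Sketch only; assumes TypeScript with the "webworker" lib types.
declare const self: ServiceWorkerGlobalScope;
// From the diff: restores the traces / clientIdToTraceUrls state from IndexedDB.
declare function loadTracesFromIndexDB(): Promise<void>;
// Hypothetical stand-in for the existing per-route handling.
declare function handleFetch(event: FetchEvent): Promise<Response>;

// Kick off the reload as soon as the service worker (re)starts.
const loadTracesPromise = loadTracesFromIndexDB();

self.addEventListener('fetch', (event: FetchEvent) => {
  event.respondWith((async () => {
    // Wait for the IndexedDB state to be restored before /ping or /contexts
    // can gc or otherwise mutate the local/stored clientIdToTraceUrls.
    await loadTracesPromise;
    return handleFetch(event);
  })());
});
```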