Html5 Canvas: Get Event When Drawing Is Finished
Solution 1:
Like almost all JavaScript functions, drawImage is synchronous, i.e. it will only return once it has actually done what it's supposed to do.
That said, what it's supposed to do, like most other DOM calls, is queue up a list of things to be repainted the next time the browser gets into the event loop.
There's no event you can specifically register to tell you when that is, since by the time any such event handler could be called, the repaint would have already happened.
Solution 2:
Jef Claes explains it pretty well on his website:
Browsers load images asynchronously while scripts are already being interpreted and executed. If the image isn't fully loaded, the canvas fails to render it.
Luckily this isn't hard to resolve. We just have to wait to start drawing until we receive a callback from the image, notifying us that loading has completed.
<script type="text/javascript">
window.addEventListener("load", draw, true);

function draw() {
  var img = new Image();
  img.onload = function() {
    var canvas = document.getElementById('canvas');
    var context = canvas.getContext('2d');
    context.drawImage(img, 0, 0);
  };
  img.src = "https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgb_R_NSYqix3ruqgXyzFAVigaanTAWgtbJKb494FjTLQpuSQl_tAfRDM_h5s5G080upaHaKyuuvGdJ6vBlhj6eQelKt8gTXSMcIJ_NSWRd6zXe8jEDpamuXOfNuL3J3P39mhgShiJcrZbf/s1600/aspnethomepageplusdevtools.PNG";
}
</script>
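The same "wait for the load callback" idea can be made explicit with a promise. The once() helper below is a hypothetical name of our own, not a browser API; it works with any EventTarget, so in a browser you could await an image's load event before drawing (a sketch, assuming a generic EventTarget):

```javascript
// Hypothetical helper (the name `once` is our own, not a standard API):
// resolve a promise when a target fires an event, optionally rejecting
// on a second (error) event.
function once(target, type, errorType) {
  return new Promise((resolve, reject) => {
    target.addEventListener(type, resolve, { once: true });
    if (errorType) target.addEventListener(errorType, reject, { once: true });
  });
}

// In a browser you could then write something like (assumption, not runnable here):
//   const img = new Image();
//   img.src = url;
//   await once(img, "load", "error");
//   context.drawImage(img, 0, 0);
```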
Solution 3:
You already have an event when the image loads, and you do one thing (draw). Why not do another and call the function that will do whatever it is you want done after drawImage? Literally just:
myImg.onload = function() {
  myContext.drawImage(containerImg, 0, 0, 300, 300);
  notify(); // guaranteed to be called after drawImage
};
Solution 4:
drawImage(), like any drawing method on the 2D canvas, is in itself "mostly" synchronous.
You can assume that any code that needs a read-back of the pixels will see the updated pixels. For drawImage in particular, you can even assume that the image will have been fully decoded "synchronously", which can take some time with big images.
Technically, in most modern configurations the actual painting work is deferred to the GPU, which implies some parallelization and some asynchronicity, but read-backs will wait until the GPU has done its work, locking the CPU for that time.
However, the drawing on the canvas is only the first step of the full rendering of the canvas to the monitor.
The canvas then needs to go through the CSS compositor, where it will get painted along with the rest of the page. This is what is deferred to the next rendering step.
alert() in Chrome does currently block the CSS compositor, and thus, even though the actual pixels of the canvas buffer have been updated, these changes haven't been reflected by the CSS compositor yet. (In Firefox, alert() triggers a kind of "spin the event loop" which allows the CSS compositor to still kick in, even if the global tasks of the event loop are paused.)
To hook into the CSS compositor, there is a requestPostAnimationFrame method being incubated, but it apparently got dropped from Chrome's experiments recently.
We can polyfill it using both requestAnimationFrame and a MessageEvent to hook into the next task as soon as possible (setTimeout is generally given less priority).
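To illustrate the MessageChannel part of that trick in isolation: posting a message to yourself queues a macrotask, which runs after all pending microtasks and tends to run sooner than a setTimeout(0) callback. The queueMacrotask name below is our own, not a standard API (a sketch, assuming a MessageChannel global as in browsers and recent Node):

```javascript
// Sketch: schedule `fn` on a following macrotask via a MessageChannel.
// (`queueMacrotask` is a hypothetical name, not a standard API.)
function queueMacrotask(fn) {
  const { port1, port2 } = new MessageChannel();
  port2.onmessage = () => {
    port1.close(); // release the channel once we've been called
    fn();
  };
  port1.postMessage(null);
}

queueMacrotask(() => console.log("runs on a later macrotask"));
console.log("runs synchronously first");
```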
Now, even this requestPostAnimationFrame only signals when the browser's compositor has kicked in; there is still some time before that image gets to the OS compositor and to the monitor (about a full V-Sync frame).
Some configurations of Chrome on Windows have access to a shortcut that allows the browser to talk directly to the OS compositor, bypassing the CSS compositor. To enable this option, you can create your 2D context with the desynchronized option set to true. However, this option is only supported in a few configurations.
Below is a demo of almost all this:
// requestPostAnimationFrame polyfill
if (typeof requestPostAnimationFrame !== "function") {
  (() => {
    const channel = new MessageChannel();
    const callbacks = [];
    let timestamp = 0;
    let called = false;
    let scheduled = false; // to make it work from rAF
    let inRAF = false; // to make it work from rAF
    channel.port2.onmessage = e => {
      called = false;
      const toCall = callbacks.slice();
      callbacks.length = 0;
      toCall.forEach(fn => {
        try {
          fn(timestamp);
        } catch (e) {}
      });
    };
    // We need to overwrite rAF to let us know we are inside an rAF callback
    // so as to avoid scheduling yet another rAF, which would be one painting frame late.
    // We could have hooked an infinite loop on rAF, but this means
    // forcing the document to be animated all the time,
    // which is bad for perfs
    const rAF = globalThis.requestAnimationFrame;
    globalThis.requestAnimationFrame = function(...args) {
      if (!scheduled) {
        scheduled = true;
        rAF.call(globalThis, (time) => inRAF = time);
        globalThis.requestPostAnimationFrame(() => {
          scheduled = false;
          inRAF = false;
        });
      }
      rAF.apply(globalThis, args);
    };
    globalThis.requestPostAnimationFrame = function(callback) {
      if (typeof callback !== "function") {
        throw new TypeError("Argument 1 is not callable");
      }
      callbacks.push(callback);
      if (!called) {
        if (inRAF) {
          timestamp = inRAF;
          channel.port1.postMessage("");
        } else {
          requestAnimationFrame((time) => {
            timestamp = time;
            channel.port1.postMessage("");
          });
        }
        called = true;
      }
    };
  })();
}

// now the demo
// if the current browser can use a desync 2D context, let's try it there too
// (I couldn't test it myself, so let me know in comments)
const supportsDesyncContext = CanvasRenderingContext2D.prototype.getContextAttributes &&
  document.createElement("canvas")
    .getContext("2d", { desynchronized: true })
    .getContextAttributes().desynchronized;

test(false);
if (supportsDesyncContext) {
  setTimeout(() => test(true), 1000);
}

async function test(desync) {
  const canvas = document.createElement("canvas");
  document.body.append(canvas);
  const ctx = canvas.getContext("2d", { desynchronized: desync });
  const blob = await fetch("https://upload.wikimedia.org/wikipedia/commons/4/47/PNG_transparency_demonstration_1.png")
    .then((resp) => resp.ok && resp.blob());
  const bitmap = await createImageBitmap(blob);
  ctx.drawImage(bitmap, 0, 0, 300, 150);
  // schedule our callback after rendering
  requestPostAnimationFrame(() => {
    alert("Right after CSS compositing");
  });
  // prove that we actually already painted on the canvas
  // even if the CSS compositor hasn't kicked in yet
  const pixelOnCanvas = ctx.getImageData(120, 120, 1, 1).data;
  alert("Before CSS compositing." + (desync ? " (desynchronized)" : "") + "\nPixel on canvas: " + pixelOnCanvas);
}
Solution 5:
The answer by @MikeGledhill (which got deleted) is essentially the beginning of the answer, though it could have explained it better, and browsers may not all have had the requestAnimationFrame API available at that time:
Painting of pixels happens in the next animation frame. This means that if you call drawImage, the screen pixels won't actually be updated at that time, but in the next animation frame.
There's no event for this.
But! We can use requestAnimationFrame to schedule a callback for the next frame, before paint (display update) happens:
myImg.onload = function() {
  myContext.drawImage(containerImg, 0, 0, 300, 300);
  requestAnimationFrame(() => {
    // This function will run in the next animation frame, *right before*
    // the browser will update the pixels on the display (paint).

    // To ensure that we run logic *after* the display has been
    // updated, an option is to queue yet one more callback
    // using setTimeout.
    setTimeout(() => {
      // At this point, the page rendering has been updated with the
      // `drawImage` result (or a later frame's result, see below).
    }, 0)
  })
};
What is happening here:
The requestAnimationFrame call schedules a function that will be called right before the browser updates the display pixels. After this callback completes, the browser continues to synchronously update the display pixels in a following tick that is very similar to a microtask.
The "microtask"-like in which the browser updates the display, happens after your requestAnimationFrame
callback, and happens after all user-created microtasks that a user creates in the callback using Promise.resolve().then()
or an await
statement. This means one cannot make deferred code fire immediately (synchronously) after the paint task happens.
The only way to guarantee that logic will fire after the next paint task is to use setTimeout (or a postMessage trick) to queue a macrotask (not a microtask) from an animation frame callback. A macrotask queued from a requestAnimationFrame callback will fire after all microtasks and microtask-likes, including the task that updates the pixels. The setTimeout (or postMessage) macrotask will not fire synchronously after animation frame microtasks.
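The ordering described above can be observed outside of a paint context too: microtasks queued with Promise.resolve().then() always run before a macrotask queued with setTimeout from the same task. A minimal sketch, with labels of our own choosing:

```javascript
// Minimal sketch of task ordering: synchronous code first, then
// microtasks, then macrotasks. The labels are our own.
const order = [];
setTimeout(() => order.push("macrotask (setTimeout)"), 0);
Promise.resolve().then(() => order.push("microtask (promise)"));
order.push("synchronous");

setTimeout(() => {
  console.log(order.join(" -> "));
  // prints "synchronous -> microtask (promise) -> macrotask (setTimeout)"
}, 10);
```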
This approach is not perfect, though. Most of the time, the macrotask queued with setTimeout (and even more likely with postMessage) will fire before the next animation frame and paint cycle. But, due to the specification of setTimeout (and postMessage), there is no guarantee that the delay will be exactly what we specify (0 in this example), and the browser is free to use heuristics and/or hard-coded values like 2ms to determine the soonest time to run a setTimeout (macrotask) callback.
Due to this non-guaranteed, non-synchronous nature of macrotask scheduling, it is possible, though in practice unlikely, that your setTimeout (or postMessage) callback fires not just after the current animation frame (and the paint cycle that updates the display), but after the next animation frame (and its paint task), meaning that a macrotask callback has a small chance of firing too late for the frame you were targeting. This chance is reduced when using postMessage instead of setTimeout.
That being said, this sort of thing is probably something you should not do unless you're trying to write tests that capture painted pixels and compare them to expected results or something similar.
In general, you should schedule any drawing logic (e.g. ctx.drawImage()) using requestAnimationFrame, never rely on the actual timing of the paint update, and assume that the user will see what the browser APIs guarantee you've specified for them to see (the browsers have their own tests in place for ensuring their APIs work).
Finally, we don't know what your actual goal is. Most likely this answer may be irrelevant to that goal.
Here's the same example using the postMessage trick:
let messageKey = 0

myImg.onload = function() {
  myContext.drawImage(containerImg, 0, 0, 300, 300);
  requestAnimationFrame(() => {
    // This function will run in the next animation frame, *right before*
    // the browser will update the pixels on the display (paint).
    const key = "Unique message key for after paint callback: " + messageKey++

    // To ensure that we run logic *after* the display has been
    // updated, an option is to queue yet one more callback
    // using postMessage.
    const afterPaint = (event) => {
      // Ignore interference from any other messaging in the app.
      if (event.data != key) return
      removeEventListener('message', afterPaint)
      // At this point, the page rendering has been updated with the
      // `drawImage` result (or a later frame's result, but
      // more unlikely than with setTimeout, as per above).
    }
    addEventListener('message', afterPaint)

    // Hack: send a message which arrives back to us in a
    // following macrotask, more likely sooner than with
    // setTimeout.
    postMessage(key, '*')
  })
};