The API defined in this document takes a valid MediaStream and returns an encoded image in the form of a Blob
(as defined in [[!FILE-API]]). The image is
provided by the capture device that provides the MediaStream. Moreover,
picture-specific settings can be optionally provided as arguments that can be applied to the image being captured.
onphoto: Handles events of type BlobEvent. The handler should expect to get a BlobEvent object as its first parameter.

onerror: Handles events of type ImageCaptureErrorEvent. The handler should expect to get an ImageCaptureError object as its first parameter.

onoptions: Handles events of type SettingsChangeEvent.

onframe: Handles events of type FrameGrabEvent. The handler should expect to get a FrameGrabEvent object as its first parameter.

When the setOptions()
method of an ImageCapture object is invoked, a valid PhotoSettings object must be passed in to the method. If the UA can successfully apply the settings, the UA must fire a SettingsChangeEvent event at the onoptions event handler (if specified). If the UA cannot successfully apply the settings, the UA must fire an ImageCaptureErrorEvent at the ImageCapture object whose code is set to OPTIONS_ERROR.

When the takePhoto()
method of an ImageCapture
object is invoked,
then if the readyState
of the VideoStreamTrack
provided in the constructor is not "live", the UA must fire an ImageCaptureErrorEvent
event at the ImageCapture
object with a
new ImageCaptureError
object whose code
is set to INVALID_TRACK. If the UA is unable to execute the takePhoto()
method for any
other reason (for example, upon invocation of multiple takePhoto() method calls in rapid succession), then the UA must fire an ImageCaptureErrorEvent
event at the ImageCapture
object with a
new ImageCaptureError
object whose code
is set to PHOTO_ERROR.
Otherwise it must
queue a task, using the DOM manipulation task source, that runs the following steps:
1. Gather data from the VideoStreamTrack into a Blob containing a single still image. The method of doing this will depend on the underlying device. Devices may temporarily stop streaming data, reconfigure themselves with the appropriate photo settings, take the photo, and then resume streaming. In this case, the stopping and restarting of streaming should cause mute and unmute events to fire on the Track in question.
2. Raise a BlobEvent event containing the Blob to the onphoto event handler (if specified).

When the grabFrame()
method of an ImageCapture
object is invoked, then if the readyState
of the VideoStreamTrack
provided in the constructor is not "live", the UA must fire an ImageCaptureErrorEvent
event at the ImageCapture
object with a
new ImageCaptureError
object whose code
is set to INVALID_TRACK. If the UA is unable to execute the grabFrame()
method for any
other reason, then the UA must fire an ImageCaptureErrorEvent
event at the ImageCapture
object with a
new ImageCaptureError
object whose code
is set to FRAME_ERROR. Otherwise it must
queue a task, using the DOM manipulation task source, that runs the following steps:
1. Gather data from the VideoStreamTrack into an ImageData object (as defined in [[!CANVAS-2D]]) containing a single still frame in RGBA format. The width and height of the ImageData object are derived from the constraints of the VideoStreamTrack.
2. Raise a FrameGrabEvent event containing the ImageData to the onframe event handler (if specified).

{Note: grabFrame() returns data only once upon being invoked.}

FrameGrabEvent
--------------
An ImageData object whose width and height attributes indicate the dimensions of the captured frame.

FrameGrabEventInit Dictionary

An ImageData object containing the data to deliver via this event.

ImageCaptureErrorEvent
--------------
An ImageCaptureError object whose code attribute indicates the type of error that occurred.

ImageCaptureErrorEventInit Dictionary

An ImageCaptureError object containing the data to deliver via this event.

BlobEvent
--------------
A Blob object whose type attribute indicates the encoding of the blob data. An implementation must return a Blob in a format that is capable of being viewed in an HTML <img> tag.

BlobEventInit Dictionary

A Blob object containing the data to deliver via this event.

SettingsChangeEvent
--------------
A PhotoSettings object whose type attribute indicates the current photo settings.

SettingsChangeEventInit Dictionary

A PhotoSettings object containing the data to deliver via this event.

ImageCaptureError
-----------------
The ImageCaptureError
object is passed to an onerror
event handler of an
ImageCapture
object if an error occurred when the object was created or any of its methods were invoked.
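For illustration, an onerror handler can branch on the code attribute by comparing it against the constants listed below. This is a sketch, not part of the API: describeCaptureError is a hypothetical helper, and the constants are read off the error object itself since their numeric values are not defined in this excerpt.

```javascript
// Maps an ImageCaptureError to a human-readable description by comparing
// its code attribute against the constants defined on the object itself.
function describeCaptureError(error) {
  switch (error.code) {
    case error.INVALID_TRACK: return 'track is not live';
    case error.FRAME_ERROR:   return 'grabFrame() failed';
    case error.OPTIONS_ERROR: return 'setOptions() failed';
    case error.PHOTO_ERROR:   return 'takePhoto() failed';
    default:                  return error.message || 'unknown error';
  }
}

// Browser wiring (illustrative):
// captureDevice.onerror = function (err) { console.log(describeCaptureError(err)); };
```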
FRAME_ERROR: An ImageCaptureError object must set its code value to this constant if an error occurred upon invocation of the grabFrame() method of the ImageCapture interface.

OPTIONS_ERROR: An ImageCaptureError object must set its code value to this constant if an error occurred upon invocation of the setOptions() method of the ImageCapture interface.

PHOTO_ERROR: An ImageCaptureError object must set its code value to this constant if an error occurred upon invocation of the takePhoto() method of the ImageCapture interface.

An ImageCaptureError object must set its code value to this constant if an error occurred due to an indeterminate cause upon invocation of any method of the ImageCapture interface.

The code attribute returns the appropriate code for the error event, derived from the constants defined in the ImageCaptureError interface.

The message attribute must return an error message describing the details of the error encountered.

MediaSettingsRange
MediaSettingsItem
The MediaSettingsItem
interface is now defined, which allows for a single setting to be managed.
PhotoOptions
The PhotoOptions attribute of the ImageCapture
object provides
the photo-specific settings options and current settings values. The following definitions are assumed
for individual settings and are provided for information purposes:
Mode | Kelvin range |
---|---|
incandescent | 2500-3500 |
fluorescent | 4000-5000 |
warm-fluorescent | 5000-5500 |
daylight | 5500-6500 |
cloudy-daylight | 6500-8000 |
twilight | 8000-9000 |
shade | 9000-10000 |
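For informative purposes, the table above can be captured in code, for example to sanity-check a color temperature against a white balance mode. This is a sketch: kelvinMatchesMode is a hypothetical helper, not part of the API; the mode strings and Kelvin ranges are the table's values.

```javascript
// Informative Kelvin ranges for the white balance modes in the table above.
var whiteBalanceKelvinRanges = {
  'incandescent':     [2500, 3500],
  'fluorescent':      [4000, 5000],
  'warm-fluorescent': [5000, 5500],
  'daylight':         [5500, 6500],
  'cloudy-daylight':  [6500, 8000],
  'twilight':         [8000, 9000],
  'shade':            [9000, 10000]
};

// Returns true if `kelvin` falls inside the informative range for `mode`.
function kelvinMatchesMode(mode, kelvin) {
  var range = whiteBalanceKelvinRanges[mode];
  return !!range && kelvin >= range[0] && kelvin <= range[1];
}
```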
WhiteBalanceModeEnum

ExposureMode
PhotoSettings
The PhotoSettings
object is optionally passed into the ImageCapture.setOptions()
method
in order to modify capture device settings specific to still imagery. Each of the attributes in this object
is optional.
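Because every attribute is optional, a caller passes only the settings it wants changed. A minimal sketch (redEyeReductionSetting is the attribute name used in the examples later in this document; other attribute names would follow the PhotoSettings definition):

```javascript
// Only the settings present in the dictionary are applied; attributes
// omitted from the object leave the device's current values untouched.
var settings = { redEyeReductionSetting: true };

// Browser wiring (illustrative): apply and wait for the SettingsChangeEvent.
// captureDevice.onoptions = function () { /* settings were applied */ };
// captureDevice.setOptions(settings);
```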
ExposureModeEnum

ImageCapture
If the User Agent supports Promises, then the following may be used. Any Promise object is assumed to have a resolver object associated with it, with resolve() and reject() methods.
{NOTE: The setOptions()
method is not recast as a Promise due to the possibility that its associated event handler onoptions
may be repeatedly invoked.}
onerror: Handles events of type ImageCaptureErrorEvent. The handler should expect to get an ImageCaptureError object as its first parameter.

onoptions: Handles events of type SettingsChangeEvent.

When the setOptions()
method of an ImageCapture object is invoked, a valid PhotoSettings object must be passed in to the method. If the UA can successfully apply the settings, the UA must fire a SettingsChangeEvent event at the onoptions event handler (if specified). If the UA cannot successfully apply the settings, the UA must fire an ImageCaptureErrorEvent at the ImageCapture object whose code is set to OPTIONS_ERROR.

When the takePhoto()
method of an ImageCapture
object is invoked, a new Promise object is returned.
If the readyState
of the VideoStreamTrack
provided in the constructor is not "live", the UA must return an ImageCaptureErrorEvent
event to the resolver object's reject() method with a
new ImageCaptureError
object whose code
is set to INVALID_TRACK. If the UA is unable to execute the takePhoto()
method for any
other reason (for example, upon invocation of multiple takePhoto() method calls in rapid succession), then the UA must return an ImageCaptureErrorEvent
event to the resolver object's reject() method with a
new ImageCaptureError
object whose code
is set to PHOTO_ERROR.
Otherwise it must
queue a task, using the DOM manipulation task source, that runs the following steps:
1. Gather data from the VideoStreamTrack into a Blob containing a single still image. The method of doing this will depend on the underlying device. Devices may temporarily stop streaming data, reconfigure themselves with the appropriate photo settings, take the photo, and then resume streaming. In this case, the stopping and restarting of streaming should cause mute and unmute events to fire on the Track in question.
2. Raise a BlobEvent event containing the Blob to the resolver object's resolve() method.

When the grabFrame()
method of an ImageCapture
object is invoked, a new Promise object is returned. If the readyState
of the VideoStreamTrack
provided in the constructor is not "live", the UA must return an ImageCaptureErrorEvent
event to the resolver object's reject() method with a
new ImageCaptureError
object whose code
is set to INVALID_TRACK. If the UA is unable to execute the grabFrame()
method for any
other reason, then the UA must return an ImageCaptureErrorEvent
event to the resolver object's reject() method with a
new ImageCaptureError
object whose code
is set to FRAME_ERROR. Otherwise it must
queue a task, using the DOM manipulation task source, that runs the following steps:
1. Gather data from the VideoStreamTrack into an ImageData object (as defined in [[!CANVAS-2D]]) containing a single still frame in RGBA format. The width and height of the ImageData object are derived from the constraints of the VideoStreamTrack.
2. Raise a FrameGrabEvent event containing the ImageData to the resolver object's resolve() method.

{Note: grabFrame()
returns data only once upon being invoked.}

##### Taking a picture if Red Eye Reduction is activated

```javascript
navigator.getUserMedia({video: true}, gotMedia, failedToGetMedia);

function gotMedia(mediastream) {
  // Extract video track.
  var videoDevice = mediastream.getVideoTracks()[0];
  // Check if this device supports a picture mode...
  var captureDevice = new ImageCapture(videoDevice);
  if (captureDevice) {
    captureDevice.onphoto = showPicture;
    if (captureDevice.photoOptions.redEyeReduction) {
      captureDevice.setOptions({redEyeReductionSetting: true});
    } else {
      console.log('No red eye reduction');
    }
    captureDevice.onoptions = function () {
      if (captureDevice.photoOptions.redEyeReduction.value) {
        captureDevice.takePhoto();
      }
    };
  }
}

function showPicture(e) {
  var img = document.querySelector("img");
  img.src = URL.createObjectURL(e.data);
}

function failedToGetMedia() {
  console.log('Stream failure');
}
```

##### Grabbing a Frame for Post-Processing
```javascript
navigator.getUserMedia({video: true}, gotMedia, failedToGetMedia);

function gotMedia(mediastream) {
  // Extract video track.
  var videoDevice = mediastream.getVideoTracks()[0];
  // Check if this device supports a picture mode...
  var captureDevice = new ImageCapture(videoDevice);
  if (captureDevice) {
    captureDevice.onframe = processFrame;
    captureDevice.grabFrame();
  }
}

function processFrame(e) {
  var imgData = e.imageData;
  var width = imgData.width;
  var height = imgData.height;
  // Alpha is every fourth byte of the RGBA pixel array, so iterate over
  // the full data length, not the width.
  for (var j = 3; j < imgData.data.length; j += 4) {
    // Set all alpha values to medium opacity
    imgData.data[j] = 128;
  }
  // Create a new ImageData object with the modified pixel values
  var canvas = document.createElement('canvas');
  var ctx = canvas.getContext("2d");
  var newImg = ctx.createImageData(width, height);
  for (var i = 0; i < imgData.data.length; i++) {
    newImg.data[i] = imgData.data[i];
  }
  // ... and do something with the modified image ...
}

function failedToGetMedia() {
  console.log('Stream failure');
}
```

##### Repeated grabbing of a frame
```html
<html>
<body>
<p><canvas id="frame"></canvas></p>
<button onclick="stopFunction()">Stop frame grab</button>
<script>
var canvas = document.getElementById('frame');
var frameVar;
navigator.getUserMedia({video: true}, gotMedia, failedToGetMedia);

function gotMedia(mediastream) {
  // Extract video track.
  var videoDevice = mediastream.getVideoTracks()[0];
  // Check if this device supports a picture mode...
  var captureDevice = new ImageCapture(videoDevice);
  if (captureDevice) {
    captureDevice.onframe = processFrame;
    // Pass a function to setInterval rather than invoking grabFrame() once.
    frameVar = setInterval(function () {
      captureDevice.grabFrame();
    }, 1000);
  }
}

function processFrame(e) {
  var imgData = e.imageData;
  canvas.width = imgData.width;
  canvas.height = imgData.height;
  // ImageData is drawn with putImageData(), not drawImage().
  canvas.getContext('2d').putImageData(imgData, 0, 0);
}

function stopFunction(e) {
  clearInterval(frameVar);
}

function failedToGetMedia() {
  console.log('Stream failure');
}
</script>
</body>
</html>
```

##### Taking a picture if Red Eye Reduction is activated using promises
```javascript
navigator.getUserMedia({video: true}, gotMedia, failedToGetMedia);

function gotMedia(mediastream) {
  // Extract video track.
  var videoDevice = mediastream.getVideoTracks()[0];
  // Check if this device supports a picture mode...
  var captureDevice = new ImageCapture(videoDevice);
  if (captureDevice) {
    if (captureDevice.photoOptions.redEyeReduction) {
      captureDevice.setOptions({redEyeReductionSetting: true});
    } else {
      console.log('No red eye reduction');
    }
    captureDevice.onoptions = function () {
      if (captureDevice.photoOptions.redEyeReduction.value) {
        // Pass the fulfillment handler itself; do not invoke it here.
        captureDevice.takePhoto().then(showPicture, function (error) {
          alert("Failed to take photo");
        });
      }
    };
  }
}

function showPicture(e) {
  var img = document.querySelector("img");
  img.src = URL.createObjectURL(e.data);
}

function failedToGetMedia() {
  console.log('Stream failure');
}
```
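The promise form of grabFrame() can be used the same way. A sketch under the assumption that the UA implements the promise-returning variant described above; grabOneFrame is a hypothetical helper, not part of the API:

```javascript
// Assumes a UA implementing the promise-returning grabFrame() variant.
function grabOneFrame(captureDevice) {
  // grabFrame() returns a Promise; the FrameGrabEvent delivered to
  // resolve() carries the ImageData for the captured frame.
  return captureDevice.grabFrame().then(function (e) {
    var imgData = e.imageData;
    console.log('Grabbed frame: ' + imgData.width + 'x' + imgData.height);
    return imgData;
  }, function (error) {
    console.log('grabFrame() failed: ' + error.message);
    throw error;
  });
}

// Browser wiring (illustrative):
// navigator.getUserMedia({video: true}, function (mediastream) {
//   grabOneFrame(new ImageCapture(mediastream.getVideoTracks()[0]));
// }, function () { console.log('Stream failure'); });
```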