Comments on this document are welcomed.
This document specifies the takePhoto() and grabFrame() methods, and corresponding camera settings, for use with MediaStreams as defined in Media Capture and Streams [[!GETUSERMEDIA]].
Introduction
------------

The API defined in this document takes a valid MediaStream and returns an encoded image in the form of a Blob (as defined in [[!FILE-API]]). The image is provided by the capture device that supplies the MediaStream. Moreover, picture-specific settings can optionally be provided as arguments to be applied to the image being captured.
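
The following non-normative sketch illustrates the intended flow: a MediaStream is obtained, an ImageCapture object is constructed from one of its video tracks, and the captured photo is delivered as a Blob. The variable and element names are illustrative only; the interfaces used are defined in the sections below.

    navigator.getUserMedia({video: true}, function (mediaStream) {
      // Construct an ImageCapture object from the stream's first video track.
      var track = mediaStream.getVideoTracks()[0];
      var capture = new ImageCapture(track);
      capture.onphoto = function (blobEvent) {
        // blobEvent.data is the encoded still image as a Blob.
        document.querySelector("img").src = URL.createObjectURL(blobEvent.data);
      };
      capture.takePhoto();
    }, function () {
      console.log('Stream failure');
    });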

Image Capture API
-----------------
interface ImageCapture: EventTarget
readonly attribute PhotoOptions photoOptions
Describes current photo settings
readonly attribute VideoStreamTrack videoStreamTrack
The MediaStreamTrack passed into the constructor
readonly attribute MediaStream previewStream
The MediaStream that provides a camera preview
attribute EventHandler onphoto
Register/unregister for photo events of type BlobEvent. The handler should expect to get a BlobEvent object as its first parameter.
attribute EventHandler onerror
Register/unregister for Image Capture error events of type ImageCaptureErrorEvent. The handler should expect to get an ImageCaptureError object as its first parameter.
attribute EventHandler onoptions
Register/unregister for photo settings change events of type SettingsChangeEvent.
attribute EventHandler onframe
Register/unregister for frame capture events of type FrameGrabEvent. The handler should expect to get a FrameGrabEvent object as its first parameter.
void setOptions(PhotoSettings? photoSettings)
When the setOptions() method of an ImageCapture object is invoked, a valid PhotoSettings object must be passed as its argument. If the UA can successfully apply the settings, then the UA must fire a SettingsChangeEvent event at the onoptions event handler (if specified). If the UA cannot successfully apply the settings, then the UA must fire an ImageCaptureErrorEvent at the ImageCapture object with a new ImageCaptureError object whose code is set to OPTIONS_ERROR.
void takePhoto ()
When the takePhoto() method of an ImageCapture object is invoked, then if the readyState of the VideoStreamTrack provided in the constructor is not "live", the UA must fire an ImageCaptureErrorEvent event at the ImageCapture object with a new ImageCaptureError object whose code is set to INVALID_TRACK. If the UA is unable to execute the takePhoto() method for any other reason (for example, upon invocation of multiple takePhoto() method calls in rapid succession), then the UA must fire an ImageCaptureErrorEvent event at the ImageCapture object with a new ImageCaptureError object whose code is set to PHOTO_ERROR. Otherwise it must queue a task, using the DOM manipulation task source, that runs the following steps:
  1. Gather data from the VideoStreamTrack into a Blob containing a single still image. The method of doing this will depend on the underlying device. Devices may temporarily stop streaming data, reconfigure themselves with the appropriate photo settings, take the photo, and then resume streaming. In this case, the stopping and restarting of streaming should cause mute and unmute events to fire on the Track in question (a non-normative sketch of observing these events follows this interface definition).
  2. Raise a BlobEvent event containing the Blob to the onphoto event handler (if specified).
void grabFrame()
When the grabFrame() method of an ImageCapture object is invoked, then if the readyState of the VideoStreamTrack provided in the constructor is not "live", the UA must fire an ImageCaptureErrorEvent event at the ImageCapture object with a new ImageCaptureError object whose code is set to INVALID_TRACK. If the UA is unable to execute the grabFrame() method for any other reason, then the UA must fire an ImageCaptureErrorEvent event at the ImageCapture object with a new ImageCaptureError object whose code is set to FRAME_ERROR. Otherwise it must queue a task, using the DOM manipulation task source, that runs the following steps:
  1. Gather data from the VideoStreamTrack into an ImageData object (as defined in [[!CANVAS-2D]]) containing a single still frame in RGBA format. The width and height of the ImageData object are derived from the constraints of the VideoStreamTrack.
  2. Raise a FrameGrabEvent event containing the ImageData to the onframe event handler (if specified). {Note: grabFrame() returns data only once upon being invoked.}
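
The following non-normative sketch shows one way a page could observe the mute and unmute events mentioned in step 1 of takePhoto() above; captureDevice and videoDevice are assumed to be set up as in the examples at the end of this document.

    // Log when the track is paused and resumed around a photo capture.
    videoDevice.onmute = function () { console.log('Streaming paused for capture'); };
    videoDevice.onunmute = function () { console.log('Streaming resumed'); };
    captureDevice.onphoto = function (e) {
      // e.data holds the captured still image as a Blob.
      console.log('Photo captured, type: ' + e.data.type);
    };
    captureDevice.takePhoto();
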
FrameGrabEvent
--------------
readonly attribute ImageData imageData
Returns an ImageData object whose width and height attributes indicate the dimensions of the captured frame.
##### FrameGrabEventInit Dictionary
ImageData imageData
An ImageData object containing the data to deliver via this event.
ImageCaptureErrorEvent
----------------------
readonly attribute ImageCaptureError imageCaptureError
Returns an ImageCaptureError object whose code attribute indicates the type of error that occurred.
##### ImageCaptureErrorEventInit Dictionary
ImageCaptureError imageCaptureError
An ImageCaptureError object containing the data to deliver via this event.
BlobEvent
---------
readonly attribute Blob data
Returns a Blob object whose type attribute indicates the encoding of the blob data. An implementation must return a Blob in a format that is capable of being viewed in an HTML <img> tag.
##### BlobEventInit Dictionary
Blob data
A Blob object containing the data to deliver via this event.
SettingsChangeEvent
-------------------
readonly attribute PhotoSettings photoSettings
Returns a PhotoSettings object whose attributes indicate the current photo settings.
##### SettingsChangeEventInit Dictionary
PhotoSettings photoSettings
A PhotoSettings object containing the data to deliver via this event.
ImageCaptureError
-----------------

The ImageCaptureError object is passed to an onerror event handler of an ImageCapture object if an error occurred when the object was created or any of its methods were invoked.

const unsigned short FRAME_ERROR=1
An ImageCaptureError object must set its code value to this constant if an error occurred upon invocation of the grabFrame() method of the ImageCapture interface.
const unsigned short OPTIONS_ERROR=2
An ImageCaptureError object must set its code value to this constant if an error occurred upon invocation of the setOptions() method of the ImageCapture interface.
const unsigned short PHOTO_ERROR=3
An ImageCaptureError object must set its code value to this constant if an error occurred upon invocation of the takePhoto() method of the ImageCapture interface.
const unsigned short ERROR_UNKNOWN=4
An ImageCaptureError object must set its code value to this constant if an error occurred due to indeterminate cause upon invocation of any method of the ImageCapture interface.
readonly attribute unsigned short code
The code attribute returns the appropriate code for the error event, derived from the constants defined in the ImageCaptureError interface.
readonly attribute DOMString message
The message attribute must return an error message describing the details of the error encountered.
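
The following non-normative sketch shows an onerror handler that branches on the error codes above; captureDevice is assumed to be an ImageCapture instance as in the examples at the end of this document.

    captureDevice.onerror = function (error) {
      // Per the onerror description in this document, the handler receives
      // the ImageCaptureError object as its first parameter.
      switch (error.code) {
        case ImageCaptureError.FRAME_ERROR:
          console.log('grabFrame() failed: ' + error.message);
          break;
        case ImageCaptureError.OPTIONS_ERROR:
          console.log('setOptions() failed: ' + error.message);
          break;
        case ImageCaptureError.PHOTO_ERROR:
          console.log('takePhoto() failed: ' + error.message);
          break;
        default:
          console.log('Image capture error: ' + error.message);
      }
    };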

MediaSettingsRange
------------------

readonly attribute unsigned long max
The maximum value of this setting
readonly attribute unsigned long min
The minimum value of this setting
readonly attribute unsigned long initial
The current value of this setting
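
As a non-normative illustration, a page might clamp a requested value to the reported range before applying it; captureDevice and the use of the zoom setting are assumptions consistent with the PhotoOptions and PhotoSettings definitions below.

    // Clamp a requested zoom value to the range the UA reports.
    var zoomRange = captureDevice.photoOptions.zoom;  // a MediaSettingsRange
    var requested = 4;
    var clamped = Math.max(zoomRange.min, Math.min(zoomRange.max, requested));
    captureDevice.setOptions({zoom: clamped});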

MediaSettingsItem
-----------------

The MediaSettingsItem interface allows a single setting to be managed.

readonly attribute any value
Value of current setting.

PhotoOptions
------------

The PhotoOptions object, exposed via the photoOptions attribute of the ImageCapture object, provides the photo-specific settings options and their current values. The following definitions are assumed for individual settings and are provided for informational purposes:

  1. Autofocus is a setting that enables the camera hardware to automatically focus on a selected part of the imaging area.
  2. White balance mode is a setting that cameras use to adjust for different color temperatures. Color temperature is the temperature of the background light (normally measured in Kelvin). This setting can also be determined automatically by the implementation. If 'automatic' mode is selected, then the Kelvin setting for white balance mode may be overridden. Typical temperature ranges for different modes are provided below:

     | Mode             | Kelvin range |
     |------------------|--------------|
     | incandescent     | 2500-3500    |
     | fluorescent      | 4000-5000    |
     | warm-fluorescent | 5000-5500    |
     | daylight         | 5500-6500    |
     | cloudy-daylight  | 6500-8000    |
     | twilight         | 8000-9000    |
     | shade            | 9000-10000   |

  3. Exposure is the amount of light allowed to fall on the photographic medium. Auto-exposure mode is a camera setting where the exposure levels are automatically adjusted by the implementation based on the subject of the photo.
  4. Exposure Compensation is a numeric camera setting that adjusts the exposure level from the current value used by the implementation. This value can be used to bias the exposure level enabled by auto-exposure.
  5. The ISO setting of a camera describes the sensitivity of the camera to light. It is a numeric value, where a lower value indicates lower sensitivity. This setting in most implementations relates to shutter speed, and is sometimes known as the ASA setting.
  6. Red Eye Reduction is a feature in cameras that is designed to limit or prevent the appearance of red pupils ("Red Eye") in photography subjects due to the camera's flash.
  7. Brightness refers to the numeric camera setting that adjusts the perceived amount of light emitting from the photo object. A higher brightness setting increases the intensity of darker areas in a scene while compressing the intensity of brighter parts of the scene.
  8. Contrast is the numeric camera setting that controls the difference in brightness between light and dark areas in a scene. A higher contrast setting reflects an expansion in the difference in brightness.
  9. Saturation is a numeric camera setting that controls the intensity of color in a scene (i.e. the amount of gray in the scene). Very low saturation levels will result in photos closer to black-and-white.
  10. Sharpness is a numeric camera setting that controls the intensity of edges in a scene. Higher sharpness settings result in higher edge intensity, while lower settings result in less contrast and blurrier edges (i.e. soft focus).
  11. Zoom is a numeric camera setting that controls the focal length of the lens. The setting usually represents a ratio, e.g. 4 is a zoom ratio of 4:1. The minimum value is usually 1, to represent a 1:1 ratio (i.e. no zoom).
attribute MediaSettingsItem autoWhiteBalanceMode
This reflects whether automatic white balance mode selection is on or off; the value is a boolean, where true means on.
attribute MediaSettingsRange whiteBalanceMode
This reflects the current white balance mode setting. Values are of type WhiteBalanceModeEnum.
attribute ExposureMode autoExposureMode
This reflects the current auto exposure mode setting. Values are of type ExposureMode.
attribute MediaSettingsRange exposureCompensation
This reflects the current exposure compensation setting and permitted range. Values are numeric.
attribute MediaSettingsRange iso
This reflects the current camera ISO setting and permitted range. Values are numeric.
attribute MediaSettingsRange exposure
This reflects the current exposure level for recorded images. Values are numeric.
attribute MediaSettingsItem redEyeReduction
This reflects whether camera red eye reduction is on or off; the value is a boolean, where true means on.
attribute MediaSettingsRange brightness
This reflects the current brightness setting of the camera and permitted range. Values are numeric.
attribute MediaSettingsRange contrast
This reflects the current contrast setting of the camera and permitted range. Values are numeric.
attribute MediaSettingsRange saturation
This reflects the current saturation setting of the camera and permitted range. Values are numeric.
attribute MediaSettingsRange sharpness
This reflects the current sharpness setting of the camera and permitted range. Values are numeric.
attribute MediaSettingsRange imageHeight
This reflects the image height range supported by the UA and the current height setting.
attribute MediaSettingsRange imageWidth
This reflects the image width range supported by the UA and the current width setting.
attribute MediaSettingsRange zoom
This reflects the zoom value range supported by the UA and the current zoom setting.
attribute boolean autofocus
This reflects the current autofocus setting. false means autofocus is disabled.
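
The following non-normative sketch reads a few of the current photo options; captureDevice is assumed to be an ImageCapture instance as in the examples at the end of this document.

    var options = captureDevice.photoOptions;
    // MediaSettingsRange members expose min, max and the current (initial) value.
    console.log('ISO range: ' + options.iso.min + '-' + options.iso.max +
                ', current: ' + options.iso.initial);
    // MediaSettingsItem members expose a single value.
    console.log('Red eye reduction on: ' + options.redEyeReduction.value);
    if (!options.autofocus) {
      console.log('Autofocus is currently disabled');
    }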

ExposureMode
------------

frame-average
Average of light information from entire scene
center-weighted
Sensitivity concentrated towards center of viewfinder
spot-metering
Spot-centered weighting

PhotoSettings
-------------

The PhotoSettings object is optionally passed into the ImageCapture.setOptions() method in order to modify capture device settings specific to still imagery. Each of the attributes in this object is optional.

attribute boolean autoWhiteBalanceMode
This reflects whether automatic White Balance Mode selection is desired.
attribute unsigned long whiteBalanceMode
This reflects the desired white balance mode setting.
attribute any autoExposureMode
This reflects the desired auto exposure mode setting. Acceptable values are of type ExposureMode.
attribute unsigned long exposureCompensation
This reflects the desired exposure compensation setting.
attribute unsigned long iso
This reflects the desired camera ISO setting.
attribute unsigned long exposure
This reflects the desired exposure level for recorded images.
attribute boolean redEyeReduction
This reflects whether camera red eye reduction is desired
attribute unsigned long brightness
This reflects the desired brightness setting of the camera.
attribute unsigned long contrast
This reflects the desired contrast setting of the camera.
attribute unsigned long saturation
This reflects the desired saturation setting of the camera.
attribute unsigned long sharpness
This reflects the desired sharpness setting of the camera.
attribute unsigned long imageHeight
This reflects the desired image height.
attribute unsigned long imageWidth
This reflects the desired image width.
attribute unsigned long zoom
This reflects the desired zoom value.
attribute boolean autofocus
This reflects the desired autofocus setting.
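
The following non-normative sketch applies several still-image settings at once and observes the resulting SettingsChangeEvent; the particular values chosen are illustrative only.

    captureDevice.onoptions = function (e) {
      // e.photoSettings reflects the settings the UA has applied.
      console.log('Image size set to ' + e.photoSettings.imageWidth +
                  'x' + e.photoSettings.imageHeight);
    };
    captureDevice.setOptions({
      imageWidth: 1920,      // illustrative values; the UA may reject
      imageHeight: 1080,     // settings it cannot apply (OPTIONS_ERROR)
      redEyeReduction: true
    });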

Promise Extensions to ImageCapture
----------------------------------

If the User Agent supports Promises, then the following may be used. Any Promise object is assumed to have a resolver object, with resolve() and reject() methods, associated with it. {NOTE: The setOptions() method is not recast as a Promise due to the possibility that its associated event handler onoptions may be repeatedly invoked.}

interface ImageCapture: EventTarget
readonly attribute PhotoOptions photoOptions
Describes current photo settings
readonly attribute VideoStreamTrack videoStreamTrack
The MediaStreamTrack passed into the constructor
readonly attribute MediaStream previewStream
The MediaStream that provides a camera preview
attribute EventHandler onerror
Register/unregister for Image Capture error events of type ImageCaptureErrorEvent. The handler should expect to get an ImageCaptureError object as its first parameter.
attribute EventHandler onoptions
Register/unregister for photo settings change events of type SettingsChangeEvent.
void setOptions(PhotoSettings? photoSettings)
When the setOptions() method of an ImageCapture object is invoked, a valid PhotoSettings object must be passed as its argument. If the UA can successfully apply the settings, then the UA must fire a SettingsChangeEvent event at the onoptions event handler (if specified). If the UA cannot successfully apply the settings, then the UA must fire an ImageCaptureErrorEvent at the ImageCapture object with a new ImageCaptureError object whose code is set to OPTIONS_ERROR.
Promise takePhoto ()
When the takePhoto() method of an ImageCapture object is invoked, a new Promise object is returned. If the readyState of the VideoStreamTrack provided in the constructor is not "live", the UA must return an ImageCaptureErrorEvent event to the resolver object's reject() method with a new ImageCaptureError object whose code is set to INVALID_TRACK. If the UA is unable to execute the takePhoto() method for any other reason (for example, upon invocation of multiple takePhoto() method calls in rapid succession), then the UA must return an ImageCaptureErrorEvent event to the resolver object's reject() method with a new ImageCaptureError object whose code is set to PHOTO_ERROR. Otherwise it must queue a task, using the DOM manipulation task source, that runs the following steps:
  1. Gather data from the VideoStreamTrack into a Blob containing a single still image. The method of doing this will depend on the underlying device. Devices may temporarily stop streaming data, reconfigure themselves with the appropriate photo settings, take the photo, and then resume streaming. In this case, the stopping and restarting of streaming should cause mute and unmute events to fire on the Track in question.
  2. Return a BlobEvent event containing the Blob to the resolver object's resolve() method.
Promise grabFrame()
When the grabFrame() method of an ImageCapture object is invoked, a new Promise object is returned. If the readyState of the VideoStreamTrack provided in the constructor is not "live", the UA must return an ImageCaptureErrorEvent event to the resolver object's reject() method with a new ImageCaptureError object whose code is set to INVALID_TRACK. If the UA is unable to execute the grabFrame() method for any other reason, then the UA must return an ImageCaptureErrorEvent event to the resolver object's reject() method with a new ImageCaptureError object whose code is set to FRAME_ERROR. Otherwise it must queue a task, using the DOM manipulation task source, that runs the following steps:
  1. Gather data from the VideoStreamTrack into an ImageData object (as defined in [[!CANVAS-2D]]) containing a single still frame in RGBA format. The width and height of the ImageData object are derived from the constraints of the VideoStreamTrack.
  2. Return a FrameGrabEvent event containing the ImageData to the resolver object's resolve() method. {Note: grabFrame() returns data only once upon being invoked.}
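
As a non-normative sketch, the promise-returning grabFrame() might be used as follows; the rejection value is assumed to carry the ImageCaptureError described above, and captureDevice is assumed to be an ImageCapture instance as in the examples below.

    var canvas = document.querySelector('canvas');
    captureDevice.grabFrame().then(function (frameEvent) {
      // The promise resolves with a FrameGrabEvent carrying the ImageData.
      var imgData = frameEvent.imageData;
      canvas.width = imgData.width;
      canvas.height = imgData.height;
      canvas.getContext('2d').putImageData(imgData, 0, 0);
    }, function (error) {
      console.log('grabFrame() failed');
    });
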
Examples
--------

##### Taking a picture if Red Eye Reduction is activated
    navigator.getUserMedia({video: true}, gotMedia, failedToGetMedia);

    function gotMedia(mediastream) {
      // Extract the video track.
      var videoDevice = mediastream.getVideoTracks()[0];
      // Check if this device supports a picture mode...
      var captureDevice = new ImageCapture(videoDevice);
      if (captureDevice) {
        captureDevice.onphoto = showPicture;
        if (captureDevice.photoOptions.redEyeReduction) {
          captureDevice.setOptions({redEyeReduction: true});
        } else {
          console.log('No red eye reduction');
        }
        captureDevice.onoptions = function () {
          if (captureDevice.photoOptions.redEyeReduction.value) {
            captureDevice.takePhoto();
          }
        };
      }
    }

    function showPicture(e) {
      var img = document.querySelector("img");
      img.src = URL.createObjectURL(e.data);
    }

    function failedToGetMedia() {
      console.log('Stream failure');
    }
    
##### Grabbing a Frame for Post-Processing
    navigator.getUserMedia({video: true}, gotMedia, failedToGetMedia);

    function gotMedia(mediastream) {
      // Extract the video track.
      var videoDevice = mediastream.getVideoTracks()[0];
      // Check if this device supports a picture mode...
      var captureDevice = new ImageCapture(videoDevice);
      if (captureDevice) {
        captureDevice.onframe = processFrame;
        captureDevice.grabFrame();
      }
    }

    function processFrame(e) {
      var imgData = e.imageData;
      var width = imgData.width;
      var height = imgData.height;
      for (var j = 3; j < imgData.data.length; j += 4) {
        // Set all alpha values to medium opacity
        imgData.data[j] = 128;
      }
      // Create a new ImageData object with the modified pixel values
      var canvas = document.createElement('canvas');
      var ctx = canvas.getContext("2d");
      var newImg = ctx.createImageData(width, height);
      for (var i = 0; i < imgData.data.length; i++) {
        newImg.data[i] = imgData.data[i];
      }
      // ... and do something with the modified image ...
    }

    function failedToGetMedia() {
      console.log('Stream failure');
    }
    
##### Repeated grabbing of a frame
    <html>
    <body>
    <p><canvas id="frame"></canvas></p>
    <button onclick="stopFunction()">Stop frame grab</button>
    <script>
    var canvas = document.getElementById('frame');
    var frameVar;
    navigator.getUserMedia({video: true}, gotMedia, failedToGetMedia);

    function gotMedia(mediastream) {
      // Extract the video track.
      var videoDevice = mediastream.getVideoTracks()[0];
      // Check if this device supports a picture mode...
      var captureDevice = new ImageCapture(videoDevice);
      if (captureDevice) {
        captureDevice.onframe = processFrame;
        frameVar = setInterval(function () { captureDevice.grabFrame(); }, 1000);
      }
    }

    function processFrame(e) {
      var imgData = e.imageData;
      canvas.width = imgData.width;
      canvas.height = imgData.height;
      canvas.getContext('2d').putImageData(imgData, 0, 0);
    }

    function stopFunction(e) {
      clearInterval(frameVar);
    }

    function failedToGetMedia() {
      console.log('Stream failure');
    }
    </script>
    </body>
    </html>
    
##### Taking a picture if Red Eye Reduction is activated using promises
    navigator.getUserMedia({video: true}, gotMedia, failedToGetMedia);

    function gotMedia(mediastream) {
      // Extract the video track.
      var videoDevice = mediastream.getVideoTracks()[0];
      // Check if this device supports a picture mode...
      var captureDevice = new ImageCapture(videoDevice);
      if (captureDevice) {
        if (captureDevice.photoOptions.redEyeReduction) {
          captureDevice.setOptions({redEyeReduction: true});
        } else {
          console.log('No red eye reduction');
        }
        captureDevice.onoptions = function () {
          if (captureDevice.photoOptions.redEyeReduction.value) {
            captureDevice.takePhoto().then(showPicture, function (error) {
              alert("Failed to take photo");
            });
          }
        };
      }
    }

    function showPicture(e) {
      var img = document.querySelector("img");
      img.src = URL.createObjectURL(e.data);
    }

    function failedToGetMedia() {
      console.log('Stream failure');
    }