An image that can be displayed as video is 24 planes deep (TrueColor) and cannot be rendered to a pixmap. On the video capture board, the opaque member of the MVEX RenderModel structure is set to True, so the pixel values cannot be referenced. Invalid values are also set in the red_mask, green_mask, and blue_mask members of the same structure.
The spirit of this extension is to provide an X interface to the generally interesting aspects of displaying live video in windows, and of capturing graphics from windows and converting them to a video signal. Based on an earlier extension called VEX, this extension endeavors to provide the minimal support needed for integrating video into the window system.
For applications that need access to the digitized video pixels, this extension allows normal pixmaps or windows with core visual classes to hold video, which in turn allows read and write access to the pixels via the core protocol or other extensions such as the X Image Extension. For hardware that does not actually digitize any pixels, MVEX introduces two new visual classes: VideoColor and VideoGray. Their purpose is to express the visual aspects of a visible but untouchable video picture, hence drawing on a window with one of these visual classes has no effect and the pixels are undefined.
Today, video input and output hardware has complex limitations and capabilities; the
capabilities do not always scale in expected ways, nor do the limitations always make sense.
The MVEX extension can provide large amounts of information about these limitations and
capabilities. It is intended that the client, through the use of convenience routines in
the MVEXlib implementation, will be able to easily determine what it can and can't do
without the overhead of a round-trip request to the server.
There are three important classes of video input hardware. Each class produces a video
picture on the screen using a different method, and the method affects the visual class
used to represent the window or pixmap that contains it. MVEX attempts to recognize all
three classes:
Windows created with VideoGray display only the luminance portion of a video signal; windows created with VideoColor are able to display a color picture if chrominance information is available in the video signal. A side effect of this visual class is that the visible rendition of regions not covered by a RenderVideo request are undefined, including the window's border. However, it is expected that server implementors will pick reasonable defaults.
FRACTION: [ numerator, denominator: INT32 ]

FRACTIONRANGE: [ num-base, num-inc, num-limit: INT32
                 num-type: {Linear, Geometric}
                 denom-base, denom-inc, denom-limit: INT32
                 denom-type: {Linear, Geometric} ]

    Linear:    n:     { n = base to limit step inc }
    Geometric: n^exp: { n = base; exp = inc to limit step 1 }
A FRACTIONRANGE supplied by QueryVideo is guaranteed never to generate a fraction with a zero denominator, or a fractional numerator or denominator. For Linear, num-inc and denom-inc are guaranteed to be greater than zero, and the limit is always guaranteed to be base plus some integral multiple of inc. For Geometric, num-inc and denom-inc are guaranteed to be greater than or equal to zero. For example, a FRACTIONRANGE of {512, 128, 1024, Linear, 512, 512, 1024, Linear} creates the set of fractions n/d with numerators n in {512, 640, 768, 896, 1024} and denominators d in {512, 1024}.
A FRACTIONRANGE of {2, 0, 2, Geometric, 1, 1, 127, Linear} creates the set of fractions n/d with numerators n in {1, 2, 4} and denominators d in {1, 2, ..., 127}.
A FRACTIONRANGE of {0, 1, 63, Linear, 63, 1, 63, Linear} creates the set {0/63, 1/63, 2/63, ..., 63/63}.
In this last example, the fixed denominator is expressed with an increment of one because the protocol requires it to be > 0.
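To make the Linear and Geometric rules concrete, here is a minimal C sketch that enumerates the values produced by one half of a FRACTIONRANGE; the struct and function names are hypothetical and not part of MVEXlib.

```c
#include <stdio.h>
#include <stdint.h>

enum range_type { Linear, Geometric };

/* Hypothetical mirror of one half (numerator or denominator) of a FRACTIONRANGE. */
struct int_range {
    int32_t base, inc, limit;
    enum range_type type;
};

/* Print every value generated by the range, following the rules above. */
static void enumerate(const struct int_range *r)
{
    if (r->type == Linear) {
        /* QueryVideo guarantees inc > 0 for Linear FRACTIONRANGEs. */
        for (int32_t n = r->base; n <= r->limit; n += r->inc)
            printf("%d ", n);
    } else {
        /* Geometric: base^exp for exp = inc, inc+1, ..., limit. */
        for (int32_t exp = r->inc; exp <= r->limit; ++exp) {
            int32_t n = 1;
            for (int32_t i = 0; i < exp; ++i)
                n *= r->base;
            printf("%d ", n);
        }
    }
    printf("\n");
}

int main(void)
{
    struct int_range nums = { 512, 128, 1024, Linear };  /* 512 640 768 896 1024 */
    struct int_range geom = { 2, 0, 2, Geometric };      /* 1 2 4 */
    enumerate(&nums);
    enumerate(&geom);
    return 0;
}
```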
FRAME: [ negative: BOOL
         frame: CARD32
         field: CARD8 ]

TIMECODE: [ negative: BOOL
            hour, minute, second, frame, field: CARD8 ]
RECTANGLERANGE: [ base, limit: RECTANGLE
                  x-inc, y-inc: INT16
                  width-inc, height-inc: CARD16
                  type: {Linear, Geometric} ]

Linear:
    x      = { base.x, base.x + x-inc, base.x + 2*x-inc, ..., limit.x }
    y      = { base.y, base.y + y-inc, base.y + 2*y-inc, ..., limit.y }
    width  = { base.width, base.width + width-inc, base.width + 2*width-inc, ..., limit.width }
    height = { base.height, base.height + height-inc, base.height + 2*height-inc, ..., limit.height }

Geometric:
    x      = { base.x^x-inc, base.x^(x-inc+1), ..., base.x^limit.x }
    y      = { base.y^y-inc, base.y^(y-inc+1), ..., base.y^limit.y }
    width  = { base.width^width-inc, base.width^(width-inc+1), ..., base.width^limit.width }
    height = { base.height^height-inc, base.height^(height-inc+1), ..., base.height^limit.height }
A RECTANGLERANGE supplied by QueryVideo is guaranteed never to generate a
rectangle with a fractional component. For Linear, the width-inc and height-inc
values are guaranteed to be greater than or equal to zero, and the x-inc and y-inc are
guaranteed to be nonzero. For example, the RECTANGLERANGE whose value is
    [ base = {0, 0, 320, 240}, limit = {960, 784, 320, 240}
      x-inc = 16, y-inc = 1
      width-inc = 0, height-inc = 0
      type = Linear ]

generates 320x240 rectangles whose origins range over x = 0, 16, 32, ..., 960 and y = 0, 1, 2, ..., 784.
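As an illustration only (the names are hypothetical, Linear case only), the example above could be walked like this; width and height stay at their base values because both increments are zero.

```c
#include <stdio.h>
#include <stdint.h>

/* Hypothetical mirror of the core RECTANGLE type. */
struct rect { int16_t x, y; uint16_t width, height; };

/* Visit every rectangle generated by the Linear example above. */
static void walk_example(void)
{
    struct rect base  = { 0, 0, 320, 240 };
    struct rect limit = { 960, 784, 320, 240 };
    int16_t x_inc = 16, y_inc = 1;   /* width-inc = height-inc = 0: size is fixed */

    for (int32_t y = base.y; y <= limit.y; y += y_inc)
        for (int32_t x = base.x; x <= limit.x; x += x_inc)
            printf("destination %dx%d+%d+%d\n",
                   base.width, base.height, (int)x, (int)y);
}
```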
PLACEMENT: [ frame-rate: FRACTION
             source, destination: RECTANGLERANGE
             x-scale, y-scale: FRACTIONRANGE
             identity-aspect: BOOL ]
For example, consider the placement:

    frame-rate      = [ 30, 1 ]
    source          = [ base = {0, 0, 10, 15}, limit = {639, 479, 640, 480}
                        x-inc = 1, y-inc = 1
                        width-inc = 1, height-inc = 1
                        type = Linear ]
    destination     = [ base = {0, 0, 320, 240}, limit = {960, 784, 320, 240}
                        x-inc = 16, y-inc = 1
                        width-inc = 0, height-inc = 0
                        type = Linear ]
    x-scale         = [ num-{base,inc,limit} = {2, 0, 6}, type = Geometric
                        denom-{base,inc,limit} = {1, 1, 0}, type = Linear ]
    y-scale         = [ num-{base,inc,limit} = {2, 0, 5}, type = Geometric
                        denom-{base,inc,limit} = {1, 1, 0}, type = Linear ]
    identity-aspect = False
This implies that the source rectangles may be any that satisfy 0 <= x <= 639, 0 <= y <= 479, 10 <= width <= 640, and 15 <= height <= 480, and the destination rectangles may be any selected from the earlier example. The combined rectangles are limited to those source-destination pairs where the fractions destination.width/source.width and destination.height/source.height can be reduced to one of the fractions from the respective x-scale and y-scale sets. And for this example, using any of these source-destination pairs, the hardware can maintain a nominal 30 frames per second.
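As a sketch only (hypothetical types, not part of the protocol or MVEXlib), the scale-factor check described above could be expressed as:

```c
#include <stdbool.h>
#include <stdint.h>

struct fraction { int32_t num, denom; };   /* hypothetical mirror of FRACTION */

static int32_t gcd(int32_t a, int32_t b)
{
    while (b != 0) { int32_t t = a % b; a = b; b = t; }
    return a;
}

/* True if dst_width/src_width, in lowest terms, equals one of the
 * fractions enumerated from the x-scale FRACTIONRANGE. */
static bool scale_allowed(uint16_t src_width, uint16_t dst_width,
                          const struct fraction *scales, int nscales)
{
    int32_t g = gcd(dst_width, src_width);
    int32_t num = dst_width / g, denom = src_width / g;

    for (int i = 0; i < nscales; ++i) {
        int32_t sg = gcd(scales[i].num, scales[i].denom);
        if (scales[i].num / sg == num && scales[i].denom / sg == denom)
            return true;
    }
    return false;
}
```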
VIDEOABILITY: [ saturation: LISTofFRACTIONRANGE
                normal-saturation: FRACTION
                contrast: LISTofFRACTIONRANGE
                normal-contrast: FRACTION
                hue: LISTofFRACTIONRANGE
                normal-hue: FRACTION
                bright: LISTofFRACTIONRANGE
                normal-bright: FRACTION ]
Saturation is the amount
of color information present in the video image. Small values result in black and white
images; large values provide increasing amounts of color. The normal-saturation is a
suggested mean value.
Contrast determines the dynamic range of information present in the video image. Small values result in an image of constant color and intensity; large values produce more contrast. The normal-contrast is a suggested mean value.
Hue shifts the phase of the video's color information relative to its reference, causing a shift in color. A middle value results in no shift, smaller values shift red towards blue and larger values shift red towards green. The normal-hue is a suggested mean value.
Brightness determines the black level of the video signal. Small values result in a dimmer image, large values produce a brighter image. The normal-brightness is a suggested mean value.
VIDEOGEOMETRY: [ signal-frame-rate: FRACTION
                 signal-field-rate: FRACTION
                 signal-width, signal-height: CARD16
                 concurrent-use: CARD16
                 priority-steps: CARD16
                 reference-id: VREFERENCE
                 placement: LISTofPLACEMENT ]
For video input, the signal-width and signal-height describe the dimensions of the signal picture as if it were placed directly on the workstation screen. For video output, they describe the dimensions of a screen region whose pixels would map one-to-one to the output signal. It is important to note that the frame-buffer holding the video picture, if any, may not actually have these dimensions; the numbers describe the extent of the source rectangle for video input, or the destination rectangle for the video output.
For both video input and output, the concurrent-use indicates the number of times the resource may be in simultaneous use, and suggests the maximum number of identifiers a client should create using CreateVideo. The server should publish a number that the hardware can support with respect to connectivity and quality of the input or output. Clients should use this number in deciding how many resources they may use, and a video resource manager should use it in deciding how to handle allocation among multiple clients.
The protocol represents priority in RenderVideo and CaptureGraphics as a number between 0 and 100 inclusive. The priority-steps conveys the resolution of this adjustment for a particular video resource; its value is guaranteed to be greater than 0. The resolution is calculated by dividing 100 by the number of steps and rounding to the nearest integer. The resulting number is iteratively added to 0 to reach the beginning of the next priority level. Note that this means the last step will be larger than the others by one or two. For example, a value of 1 means that priority adjustments are ignored; a value of 2 means that there are two real priority levels, those below 50 (100 divided by 2) and those greater than or equal to 50; a value of 3 means three levels: 0-32, 33-65, 66-100; and so on.
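A minimal sketch of the arithmetic just described (the function name is hypothetical; only the rounding rule comes from the text above):

```c
#include <stdio.h>

/* Map a priority value (0-100) to its level index given priority-steps. */
static unsigned priority_level(unsigned priority, unsigned priority_steps)
{
    /* Resolution: 100 divided by the number of steps, rounded to nearest. */
    unsigned resolution = (100 + priority_steps / 2) / priority_steps;

    if (priority_steps <= 1 || resolution == 0)
        return 0;                        /* priority adjustments are ignored */

    unsigned level = priority / resolution;
    if (level >= priority_steps)         /* the last step absorbs the remainder */
        level = priority_steps - 1;
    return level;
}

int main(void)
{
    /* With priority-steps = 3 the resolution is 33, giving the levels
     * 0-32, 33-65 and 66-100 mentioned in the text. */
    printf("%u %u %u\n", priority_level(10, 3),   /* 0 */
                         priority_level(40, 3),   /* 1 */
                         priority_level(90, 3));  /* 2 */
    return 0;
}
```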
The reference-id uniquely identifies the resource, and this id is used by the CreateVideo request to create video input and output identifiers. If the same reference-id appears in a VIDEOGEOMETRY returned from different screens, then the concurrent-use will be the same, and a video manager can infer that one physical resource services both screens. This implies that usage of this resource on one screen reduces the available usage on another screen.
Each placement element in the list describes its own range of values. The complete range of placement parameters is derived from the entire list, but parts of one element in the list may not be combined with another. For example, source and destination rectangles described in one placement may not be combined with scale factors described in another placement.
For simplicity, a server's list of placements may be zero length. This implies that any placement is acceptable to the hardware.
RENDERMODEL: [ depth: CARD8
               visual-id: VISUALID
               opaque: BOOL
               red-mask, green-mask, blue-mask: CARD32 ]
The red, green and blue masks are those used for CaptureGraphics when the source is a pixmap and the colormap is None, and for RenderVideo when the destination is a pixmap and opaque is false. A server may supply a RENDERMODEL with a visual id of None for depths that can only be captured from or rendered to pixmaps; and a RENDERMODEL may be supplied with red-, green-, and blue-mask set to zero; but not both in the same RENDERMODEL.
It is conceivable that a system will support a depth for pixmaps, but not for windows. A server may support a gray scale CaptureGraphics for this depth by supplying a RENDERMODEL with visual id None and red- green- and blue- masks with identical bits set.
Note that if opaque is true, then the visual id is guaranteed to have a core visual class.
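For illustration, a sketch (hypothetical struct, not the MVEXlib definition) of decoding a pixel captured into a pixmap using the RENDERMODEL masks described above:

```c
#include <stdint.h>

/* Hypothetical mirror of the RENDERMODEL mask fields. */
struct render_model {
    uint32_t red_mask, green_mask, blue_mask;
};

/* Extract one component: mask the pixel, then shift it down to bit 0. */
static uint32_t component(uint32_t pixel, uint32_t mask)
{
    if (mask == 0)
        return 0;          /* all-zero masks: the visual defines the pixel layout */
    pixel &= mask;
    while ((mask & 1) == 0) {
        mask >>= 1;
        pixel >>= 1;
    }
    return pixel;
}

static void decode(uint32_t pixel, const struct render_model *m,
                   uint32_t *r, uint32_t *g, uint32_t *b)
{
    *r = component(pixel, m->red_mask);
    *g = component(pixel, m->green_mask);
    *b = component(pixel, m->blue_mask);
    /* In the gray-scale case above, all three masks are identical,
     * so the three components come out equal. */
}
```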
OWNER: [ wid: WINDOW
         vid: VIDEOIO ]
MVEX introduces one new error type.
Error | Description |
Video | A value for a VIDEOIO argument does not name a valid VIDEOIO, or a value for a VREFERENCE argument does not name a VREFERENCE provided by QueryVideo. |
Errors: Window, Match
The video-depths
specify what depths/visuals are unique for video input and output. These are
guaranteed to be different from those returned in the X connection setup; the list may be
null; pixmaps are supported for each depth listed. Further, the presence of the MVEX
extension in a server may cause the LISTofFORMAT provided by the connection setup to be
extended with additional formats that would allow GetImage and PutImage access
to windows or pixmaps created with depths and visuals not published in the connection
setup. The definition of DEPTH is included in the core protocol's description of the
connection information.
Visual ids found in video-depths having a core visual class (PseudoColor,
TrueColor, etc.) imply that graphic requests with corresponding windows are expensive.
Thus, it may be that pixels must undergo software translation before or after graphic
requests. Depths and visual ids listed in allowed-depths that are selected from the
X connection setup imply that they are not expensive, even though they may also be used
for RenderVideo and CaptureGraphics requests.
The allowed-depths specify what depths and visual ids are supported for use with
video input and video output. These depths and visual ids include those listed in
video-depths plus appropriate ones selected from those provided by the X connection setup.
The list is in no particular order. For each unique depth, at most one red-, green-,
blue-mask will have non-zero values. This avoids ambiguity over which mask to use for a given
depth when a client requests RenderVideo or CaptureGraphics with colormap
None.
The in-attr lists the attributes of the decoder or digitizer used by the video inputs. The out-attr lists the attributes of the encoder used by the video outputs. The entries in both lists correspond one-to-one with the unique video input renderers and video output encoders; the lengths of these lists imply the number of rows in video-input-models and video-output-models. CreateVideo references these resources by this implied index, starting from 0.
The in-ability are lists of device abilities (no pun intended) specific to the
display of video on the workstation display, one list for each input in the same order as
the in-attr. The out-ability are lists of device abilities specific to the
production of video output from the contents of the window system, one list for each
output in the same order as the out-attr. This list may expand in future versions of MVEX
as other abilities appear common to most video input and output hardware.
The in-ports are lists of input port names (encoded as atoms), one list for each input in the same order as the in-attr. Similarly, the out-ports are lists of output port names, one list for each output in the same order as the out-attr. The intent is that each list of ports is fully dedicated to the corresponding input or output; any additional switching or negotiation for connections should be addressed by a device control server. The string used to encode the atoms is site-specific and is intended as a marker in a user interface, such as the name for a menu item or a button label. Further meaning should be handled by a device control server.
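Since port names are encoded as atoms, a client can turn them into user-interface labels with the core Xlib call XGetAtomName. A minimal sketch follows; fetching the atom list itself is assumed to have happened already.

```c
#include <stdio.h>
#include <X11/Xlib.h>

/* Print the site-specific label for each port atom, e.g. for menu items. */
static void print_port_labels(Display *dpy, const Atom *ports, int nports)
{
    for (int i = 0; i < nports; ++i) {
        char *name = XGetAtomName(dpy, ports[i]);
        if (name != NULL) {
            printf("port %d: %s\n", i, name);
            XFree(name);
        }
    }
}
```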
The list of video input models should be interpreted as a two-dimensional array, with the column index moving fastest; it represents the relationship between the video input resources and the set of depths and visual ids used to create video windows (VW). The columns are labeled left-to-right with the list of allowed-depths, and the rows are labeled top-to-bottom with the list of video inputs. Each cell is a bitmask containing zero or more true bits.
Window is asserted if a RenderVideo request may specify a window with the intersecting depth/visual as a destination and the intersecting video input as the source. If Pixmap is asserted at the intersection of an allowed-depth and a video input, then a RenderVideo request may specify a pixmap with that depth as a destination and that video input as the source (see RenderVideo for a discussion of pixmap pixel values.) There is guaranteed to be at least one Window or Pixmap assertion in every row. In addition, there is guaranteed to be no more than one Pixmap assertion for every unique depth, per row: this avoids ambiguity for the RENDERMODEL use of opaque and the red-, green- and blue-masks.
If a window is created with a visual id whose class is VideoGray or VideoColor, then the window's pixels are always undefined. A RenderVideo will not change the window's pixel values (although the picture may still be visible).
For example, the following array means that a VW for video input #1 must be depth 12, TrueColor;
or depth 8, VideoColor; pixmaps of depth 24 can be used as a destination for a RenderVideo
request. The #2 video input can only support pixmaps of depth 12.
             | VideoColor depth 8 | TrueColor depth 12 | None depth 24 |
Video in #1  | Window             | Window             | Pixmap        |
Video in #2  | 0                  | Pixmap              | 0             |
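A sketch of indexing such a model array, stored row-major with the column index moving fastest; the bit values and names are assumed for illustration and are not taken from the protocol encoding.

```c
#include <stdint.h>
#include <stdbool.h>

/* Assumed bit names for the per-cell bitmask; the real encoding may differ. */
enum { MODEL_WINDOW = 1 << 0, MODEL_PIXMAP = 1 << 1 };

/* Cell for a given video input (row) and allowed-depth entry (column). */
static uint32_t model_cell(const uint32_t *models, int n_allowed_depths,
                           int input, int depth_index)
{
    return models[input * n_allowed_depths + depth_index];
}

/* True if a RenderVideo destination window of this depth/visual is allowed. */
static bool window_allowed(const uint32_t *models, int n_allowed_depths,
                           int input, int depth_index)
{
    return (model_cell(models, n_allowed_depths, input, depth_index)
            & MODEL_WINDOW) != 0;
}
```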
If Window is asserted, then a video output can capture the intersecting
depth/visual in a window, including the border. If Pixmap is asserted, then a pixmap
with the intersecting depth can be captured (see CaptureGraphics description of the
interpretation of pixmap pixel values). There is guaranteed to be at least one Window
or Pixmap assertion in every row. In addition, there is guaranteed to be no more
than one Pixmap assertion for every unique depth, per row: this avoids ambiguity
for the use of the red-, green- and blue-masks. If Composite is asserted then the
subwindow-mode for a CaptureGraphics request will be effectively IncludeInferiors,
regardless of the value specified in the request.
For example, if the video hardware has two video outputs, both are only capable of
capturing with a subwindow-mode of IncludeInferiors, one is able to capture depth
12, TrueColor, and one is able to capture depth 24 pixmaps; then the array would
look like:
              | TrueColor depth 12 | None depth 24      |
Video out #1  | Window, Composite  | 0                  |
Video out #2  | 0                  | Pixmap, Composite  |
If input-overlap is false, then VIRs may not overlap; true otherwise. Capture-overlap
is false if the hardware does not allow VORs to overlap; true otherwise. Io-overlap
is false if the hardware does not allow a VIR and a VOR to overlap; true otherwise. Note
that io-overlap may be true, but video-output-models determines whether a VIR may be captured
or not.
In all cases, if clip-size, input-overlap, or io-overlap constraints are
violated, the content of the violated regions is undefined; if capture-overlap or io-overlap
is violated, then the output signal for the violated regions is undefined and hardware
dependent; no errors are returned to the offending request, but a VideoViolation
event may be generated. In addition, other constraints that MVEX cannot express may be
violated, and a VideoViolation event may be generated. Requests that may violate
constraints are RenderVideo, CaptureGraphics, MapWindow, UnmapWindow,
MapSubwindows, UnmapSubwindows, ConfigureWindow, CirculateWindow,
DestroyWindow, DestroySubwindows, ReparentWindow.
The time element can be used to prevent race conditions between the delivery of
a VideoChange event and the use of some other MVEX requests. The value of the timestamp is
constant until there is a change in the availability of video inputs or outputs, and is
then replaced with the current server time.
The major and minor version numbers reflect the version of the extension run by the server.
Errors: Alloc, IDChoice, Video
This request creates the specified identifier handle for the video input or output
resource associated with reference. This resource is freed on connection close for
the creating client. The number of identifiers created by a single client corresponding
to a particular input or output is not limited by the concurrent-use for that resource as
supplied by QueryVideo. However, the number of ids in use at once is limited by the
concurrent-use limit and ownership.
Errors: Match, Atom, Video
The ability of a setup may change at any time, for example because of a media change, and a VideoSetup event may be generated.
Attribute | Type |
full-motion | BOOL |
priority | CARD8 |
marker-type | None or FrameMarker or TimecodeMarker |
in-frame | FRAME |
out-frame | FRAME |
in-timecode | TIMECODE |
out-timecode | TIMECODE |
hue | FRACTION |
saturation | FRACTION |
brightness | FRACTION |
contrast | FRACTION |
The default values when attributes are not explicitly initialized are:
Attribute | Default |
full-motion | TRUE |
priority | 100 |
marker-type | None |
in-frame | current point |
out-frame | never |
in-timecode | current point |
out-timecode | never |
hue | the normal-hue (see VIDEOABILITY) |
saturation | the normal-saturation (see VIDEOABILITY) |
brightness | the normal-brightness (see VIDEOABILITY) |
contrast | the normal-contrast (see VIDEOABILITY) |
Errors: Drawable, Value
Errors: Window, Video, Match
If the MVEX extension is present, this request will understand two new visual classes: VideoGray and VideoColor. These are used to create an instance of a video window (VW). Visuals specified in a CreateWindow request must be ones supported by the screen, and may be those supplied by QueryVideo.
Every MVEX event also contains the least-significant 16 bits of the sequence number of
the last request issued by the client that was (or is currently being) processed by the
server.