@datafire/google_vision
Client library for Cloud Vision API
Installation and Usage
npm install --save @datafire/google_vision
let google_vision = require('@datafire/google_vision').create({
access_token: "",
refresh_token: "",
client_id: "",
client_secret: "",
redirect_uri: ""
});
google_vision.vision.images.annotate({}).then(data => {
  console.log(data);
});
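Going beyond the empty call above, the following is a minimal label-detection sketch. It assumes the client is already authorized; the image URI and maxResults are illustrative placeholders, and the request body follows the GoogleCloudVisionV1p2beta1BatchAnnotateImagesRequest shape documented under the actions below.

// Sketch: label detection on a single image. The bucket URI is hypothetical.
google_vision.vision.images.annotate({
  body: {
    requests: [{
      image: { source: { imageUri: "gs://my-bucket/cat.jpg" } },
      features: [{ type: "LABEL_DETECTION", maxResults: 5 }]
    }]
  }
}).then(response => {
  // responses[0].labelAnnotations is an array of EntityAnnotation objects
  console.log(response.responses[0].labelAnnotations);
});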
Description
Integrates Google Vision features, including image labeling, face, logo, and landmark detection, optical character recognition (OCR), and detection of explicit content, into applications.
Actions
oauthCallback
Exchange the code passed to your redirect URI for an access_token
google_vision.oauthCallback({
"code": ""
}, context)
Input
- input object
  - code required string
Output
- output object
  - access_token string
  - refresh_token string
  - token_type string
  - scope string
  - expiration string
oauthRefresh
Exchange a refresh_token for an access_token
google_vision.oauthRefresh(null, context)
Input
This action has no parameters
Output
- output object
  - access_token string
  - refresh_token string
  - token_type string
  - scope string
  - expiration string
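A hedged sketch of how these two actions fit together in practice; the redirect and consent handling around them is application-specific and not shown:

// After the user lands back on your redirect_uri with ?code=..., exchange the
// code once, then use oauthRefresh whenever the access_token expires.
async function authorize(codeFromRedirect) {
  const tokens = await google_vision.oauthCallback({ code: codeFromRedirect });
  // tokens: { access_token, refresh_token, token_type, scope, expiration }
  return tokens;
}

async function refreshAccessToken() {
  // Uses the refresh_token supplied to .create() above.
  const tokens = await google_vision.oauthRefresh(null);
  return tokens.access_token;
}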
vision.files.annotate
Service that performs image detection and annotation for a batch of files. Currently only "application/pdf", "image/tiff", and "image/gif" are supported. This service extracts at most 5 frames (GIF) or pages (PDF or TIFF) from each file provided (customers can specify which 5 in AnnotateFileRequest.pages) and performs detection and annotation on each extracted image.
google_vision.vision.files.annotate({}, context)
Input
- input object
  - body GoogleCloudVisionV1p2beta1BatchAnnotateFilesRequest
  - $.xgafv string (values: 1, 2): V1 error format.
  - access_token string: OAuth access token.
  - alt string (values: json, media, proto): Data format for response.
  - callback string: JSONP
  - fields string: Selector specifying which fields to include in a partial response.
  - key string: API key. Your API key identifies your project and provides you with API access, quota, and reports. Required unless you provide an OAuth 2.0 token.
  - oauth_token string: OAuth 2.0 token for the current user.
  - prettyPrint boolean: Returns response with indentations and line breaks.
  - quotaUser string: Available to use for quota purposes for server-side applications. Can be any arbitrary string assigned to a user, but should not exceed 40 characters.
  - upload_protocol string: Upload protocol for media (e.g. "raw", "multipart").
  - uploadType string: Legacy upload protocol for media (e.g. "media", "multipart").
Output
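A minimal sketch of a synchronous file annotation call, assuming a hypothetical local PDF (./invoice.pdf); inline content must be base64 (see InputConfig), and at most 5 pages are allowed per request:

// Synchronous OCR of the first two pages of a small local PDF.
const fs = require('fs');

google_vision.vision.files.annotate({
  body: {
    requests: [{
      inputConfig: {
        content: fs.readFileSync('./invoice.pdf').toString('base64'),
        mimeType: 'application/pdf'
      },
      features: [{ type: 'DOCUMENT_TEXT_DETECTION' }],
      pages: [1, 2]
    }]
  }
}).then(data => {
  // One AnnotateFileResponse per request, one AnnotateImageResponse per page.
  console.log(data.responses[0].responses[0].fullTextAnnotation.text);
});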
vision.files.asyncBatchAnnotate
Run asynchronous image detection and annotation for a list of generic files, such as PDF files, which may contain multiple pages and multiple images per page. Progress and results can be retrieved through the google.longrunning.Operations interface. Operation.metadata contains OperationMetadata (metadata). Operation.response contains AsyncBatchAnnotateFilesResponse (results).
google_vision.vision.files.asyncBatchAnnotate({}, context)
Input
- input object
  - body GoogleCloudVisionV1p2beta1AsyncBatchAnnotateFilesRequest
  - $.xgafv string (values: 1, 2): V1 error format.
  - access_token string: OAuth access token.
  - alt string (values: json, media, proto): Data format for response.
  - callback string: JSONP
  - fields string: Selector specifying which fields to include in a partial response.
  - key string: API key. Your API key identifies your project and provides you with API access, quota, and reports. Required unless you provide an OAuth 2.0 token.
  - oauth_token string: OAuth 2.0 token for the current user.
  - prettyPrint boolean: Returns response with indentations and line breaks.
  - quotaUser string: Available to use for quota purposes for server-side applications. Can be any arbitrary string assigned to a user, but should not exceed 40 characters.
  - upload_protocol string: Upload protocol for media (e.g. "raw", "multipart").
  - uploadType string: Legacy upload protocol for media (e.g. "media", "multipart").
Output
- output Operation
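A sketch of the offline flow, assuming hypothetical bucket names: the input PDF already lives in GCS, sharded JSON results are written back to GCS, and the immediate return value is a long-running Operation to poll via the Operations interface.

google_vision.vision.files.asyncBatchAnnotate({
  body: {
    requests: [{
      inputConfig: {
        gcsSource: { uri: 'gs://my-bucket/docs/report.pdf' },
        mimeType: 'application/pdf'
      },
      features: [{ type: 'DOCUMENT_TEXT_DETECTION' }],
      outputConfig: {
        gcsDestination: { uri: 'gs://my-bucket/ocr-output/report-' },
        batchSize: 20
      }
    }]
  }
}).then(operation => {
  console.log(operation.name); // operation id to poll for completion elsewhere
});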
vision.images.annotate
Run image detection and annotation for a batch of images.
google_vision.vision.images.annotate({}, context)
Input
- input object
  - body GoogleCloudVisionV1p2beta1BatchAnnotateImagesRequest
  - $.xgafv string (values: 1, 2): V1 error format.
  - access_token string: OAuth access token.
  - alt string (values: json, media, proto): Data format for response.
  - callback string: JSONP
  - fields string: Selector specifying which fields to include in a partial response.
  - key string: API key. Your API key identifies your project and provides you with API access, quota, and reports. Required unless you provide an OAuth 2.0 token.
  - oauth_token string: OAuth 2.0 token for the current user.
  - prettyPrint boolean: Returns response with indentations and line breaks.
  - quotaUser string: Available to use for quota purposes for server-side applications. Can be any arbitrary string assigned to a user, but should not exceed 40 characters.
  - upload_protocol string: Upload protocol for media (e.g. "raw", "multipart").
  - uploadType string: Legacy upload protocol for media (e.g. "media", "multipart").
Output
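A sketch of reading the batch response, assuming the standard Vision batch shape of a top-level responses array: each entry lines up with the corresponding request, a feature's field is only present when that detection ran successfully, and error is set when a request failed (see AnnotateImageResponse under Definitions).

google_vision.vision.images.annotate({ body: { requests: [/* ... */] } })
  .then(({ responses }) => {
    for (const r of responses) {
      if (r.error) { console.error(r.error.message); continue; }
      (r.labelAnnotations || []).forEach(label =>
        console.log(`${label.description}: ${label.score.toFixed(2)}`));
    }
  });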
vision.images.asyncBatchAnnotate
Run asynchronous image detection and annotation for a list of images. Progress and results can be retrieved through the google.longrunning.Operations interface. Operation.metadata contains OperationMetadata (metadata). Operation.response contains AsyncBatchAnnotateImagesResponse (results). This service will write image annotation outputs to JSON files in the customer's GCS bucket, each JSON file containing a BatchAnnotateImagesResponse proto.
google_vision.vision.images.asyncBatchAnnotate({}, context)
Input
- input object
  - body GoogleCloudVisionV1p2beta1AsyncBatchAnnotateImagesRequest
  - $.xgafv string (values: 1, 2): V1 error format.
  - access_token string: OAuth access token.
  - alt string (values: json, media, proto): Data format for response.
  - callback string: JSONP
  - fields string: Selector specifying which fields to include in a partial response.
  - key string: API key. Your API key identifies your project and provides you with API access, quota, and reports. Required unless you provide an OAuth 2.0 token.
  - oauth_token string: OAuth 2.0 token for the current user.
  - prettyPrint boolean: Returns response with indentations and line breaks.
  - quotaUser string: Available to use for quota purposes for server-side applications. Can be any arbitrary string assigned to a user, but should not exceed 40 characters.
  - upload_protocol string: Upload protocol for media (e.g. "raw", "multipart").
  - uploadType string: Legacy upload protocol for media (e.g. "media", "multipart").
Output
- output Operation
vision.projects.locations.files.annotate
Service that performs image detection and annotation for a batch of files. Currently only "application/pdf", "image/tiff", and "image/gif" are supported. This service extracts at most 5 frames (GIF) or pages (PDF or TIFF) from each file provided (customers can specify which 5 in AnnotateFileRequest.pages) and performs detection and annotation on each extracted image.
google_vision.vision.projects.locations.files.annotate({
"parent": ""
}, context)
Input
- input object
  - parent required string: Optional. Target project and location to make a call. Format: projects/{project-id}/locations/{location-id}. If no parent is specified, a region will be chosen automatically. Supported location-ids: us (USA country only), asia (East Asia areas, like Japan, Taiwan), eu (the European Union). Example: projects/project-A/locations/eu.
  - body GoogleCloudVisionV1p2beta1BatchAnnotateFilesRequest
  - $.xgafv string (values: 1, 2): V1 error format.
  - access_token string: OAuth access token.
  - alt string (values: json, media, proto): Data format for response.
  - callback string: JSONP
  - fields string: Selector specifying which fields to include in a partial response.
  - key string: API key. Your API key identifies your project and provides you with API access, quota, and reports. Required unless you provide an OAuth 2.0 token.
  - oauth_token string: OAuth 2.0 token for the current user.
  - prettyPrint boolean: Returns response with indentations and line breaks.
  - quotaUser string: Available to use for quota purposes for server-side applications. Can be any arbitrary string assigned to a user, but should not exceed 40 characters.
  - upload_protocol string: Upload protocol for media (e.g. "raw", "multipart").
  - uploadType string: Legacy upload protocol for media (e.g. "media", "multipart").
Output
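A sketch of pinning processing to a region via the parent parameter, for example to keep data in the EU. The project id is hypothetical; omit parent to let the service pick a region automatically.

google_vision.vision.projects.locations.files.annotate({
  parent: 'projects/my-project/locations/eu',
  body: { requests: [/* same shape as vision.files.annotate */] }
}).then(data => console.log(data));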
vision.projects.locations.files.asyncBatchAnnotate
Run asynchronous image detection and annotation for a list of generic files, such as PDF files, which may contain multiple pages and multiple images per page. Progress and results can be retrieved through the google.longrunning.Operations interface. Operation.metadata contains OperationMetadata (metadata). Operation.response contains AsyncBatchAnnotateFilesResponse (results).
google_vision.vision.projects.locations.files.asyncBatchAnnotate({
"parent": ""
}, context)
Input
- input object
  - parent required string: Optional. Target project and location to make a call. Format: projects/{project-id}/locations/{location-id}. If no parent is specified, a region will be chosen automatically. Supported location-ids: us (USA country only), asia (East Asia areas, like Japan, Taiwan), eu (the European Union). Example: projects/project-A/locations/eu.
  - body GoogleCloudVisionV1p2beta1AsyncBatchAnnotateFilesRequest
  - $.xgafv string (values: 1, 2): V1 error format.
  - access_token string: OAuth access token.
  - alt string (values: json, media, proto): Data format for response.
  - callback string: JSONP
  - fields string: Selector specifying which fields to include in a partial response.
  - key string: API key. Your API key identifies your project and provides you with API access, quota, and reports. Required unless you provide an OAuth 2.0 token.
  - oauth_token string: OAuth 2.0 token for the current user.
  - prettyPrint boolean: Returns response with indentations and line breaks.
  - quotaUser string: Available to use for quota purposes for server-side applications. Can be any arbitrary string assigned to a user, but should not exceed 40 characters.
  - upload_protocol string: Upload protocol for media (e.g. "raw", "multipart").
  - uploadType string: Legacy upload protocol for media (e.g. "media", "multipart").
Output
- output Operation
vision.projects.locations.images.annotate
Run image detection and annotation for a batch of images.
google_vision.vision.projects.locations.images.annotate({
"parent": ""
}, context)
Input
- input object
  - parent required string: Optional. Target project and location to make a call. Format: projects/{project-id}/locations/{location-id}. If no parent is specified, a region will be chosen automatically. Supported location-ids: us (USA country only), asia (East Asia areas, like Japan, Taiwan), eu (the European Union). Example: projects/project-A/locations/eu.
  - body GoogleCloudVisionV1p2beta1BatchAnnotateImagesRequest
  - $.xgafv string (values: 1, 2): V1 error format.
  - access_token string: OAuth access token.
  - alt string (values: json, media, proto): Data format for response.
  - callback string: JSONP
  - fields string: Selector specifying which fields to include in a partial response.
  - key string: API key. Your API key identifies your project and provides you with API access, quota, and reports. Required unless you provide an OAuth 2.0 token.
  - oauth_token string: OAuth 2.0 token for the current user.
  - prettyPrint boolean: Returns response with indentations and line breaks.
  - quotaUser string: Available to use for quota purposes for server-side applications. Can be any arbitrary string assigned to a user, but should not exceed 40 characters.
  - upload_protocol string: Upload protocol for media (e.g. "raw", "multipart").
  - uploadType string: Legacy upload protocol for media (e.g. "media", "multipart").
Output
vision.projects.locations.images.asyncBatchAnnotate
Run asynchronous image detection and annotation for a list of images. Progress and results can be retrieved through the google.longrunning.Operations interface. Operation.metadata contains OperationMetadata (metadata). Operation.response contains AsyncBatchAnnotateImagesResponse (results). This service will write image annotation outputs to JSON files in the customer's GCS bucket, each JSON file containing a BatchAnnotateImagesResponse proto.
google_vision.vision.projects.locations.images.asyncBatchAnnotate({
"parent": ""
}, context)
Input
- input object
  - parent required string: Optional. Target project and location to make a call. Format: projects/{project-id}/locations/{location-id}. If no parent is specified, a region will be chosen automatically. Supported location-ids: us (USA country only), asia (East Asia areas, like Japan, Taiwan), eu (the European Union). Example: projects/project-A/locations/eu.
  - body GoogleCloudVisionV1p2beta1AsyncBatchAnnotateImagesRequest
  - $.xgafv string (values: 1, 2): V1 error format.
  - access_token string: OAuth access token.
  - alt string (values: json, media, proto): Data format for response.
  - callback string: JSONP
  - fields string: Selector specifying which fields to include in a partial response.
  - key string: API key. Your API key identifies your project and provides you with API access, quota, and reports. Required unless you provide an OAuth 2.0 token.
  - oauth_token string: OAuth 2.0 token for the current user.
  - prettyPrint boolean: Returns response with indentations and line breaks.
  - quotaUser string: Available to use for quota purposes for server-side applications. Can be any arbitrary string assigned to a user, but should not exceed 40 characters.
  - upload_protocol string: Upload protocol for media (e.g. "raw", "multipart").
  - uploadType string: Legacy upload protocol for media (e.g. "media", "multipart").
Output
- output Operation
Definitions
AnnotateFileResponse
- AnnotateFileResponse object: Response to a single file annotation request. A file may contain one or more images, which individually have their own responses.
  - responses array: Individual responses to images found within the file. This field will be empty if the error field is set.
    - items AnnotateImageResponse
  - error Status
  - inputConfig InputConfig
  - totalPages integer: This field gives the total number of pages in the file.
AnnotateImageResponse
- AnnotateImageResponse object: Response to an image annotation request.
  - context ImageAnnotationContext
  - cropHintsAnnotation CropHintsAnnotation
  - error Status
  - faceAnnotations array: If present, face detection has completed successfully.
    - items FaceAnnotation
  - fullTextAnnotation TextAnnotation
  - imagePropertiesAnnotation ImageProperties
  - labelAnnotations array: If present, label detection has completed successfully.
    - items EntityAnnotation
  - landmarkAnnotations array: If present, landmark detection has completed successfully.
    - items EntityAnnotation
  - localizedObjectAnnotations array: If present, localized object detection has completed successfully. This will be sorted descending by confidence score.
  - logoAnnotations array: If present, logo detection has completed successfully.
    - items EntityAnnotation
  - productSearchResults ProductSearchResults
  - safeSearchAnnotation SafeSearchAnnotation
  - textAnnotations array: If present, text (OCR) detection has completed successfully.
    - items EntityAnnotation
  - webDetection WebDetection
AsyncAnnotateFileResponse
- AsyncAnnotateFileResponse object: The response for a single offline file annotation request.
  - outputConfig OutputConfig
AsyncBatchAnnotateFilesResponse
- AsyncBatchAnnotateFilesResponse object: Response to an async batch file annotation request.
  - responses array: The list of file annotation responses, one for each request in AsyncBatchAnnotateFilesRequest.
AsyncBatchAnnotateImagesResponse
- AsyncBatchAnnotateImagesResponse object: Response to an async batch image annotation request.
  - outputConfig OutputConfig
BatchAnnotateFilesResponse
- BatchAnnotateFilesResponse object: A list of file annotation responses.
  - responses array: The list of file annotation responses, each response corresponding to each AnnotateFileRequest in BatchAnnotateFilesRequest.
    - items AnnotateFileResponse
BatchOperationMetadata
- BatchOperationMetadata object: Metadata for the batch operations such as the current state. This is included in the metadata field of the Operation returned by the GetOperation call of the google::longrunning::Operations service.
  - endTime string: The time when the batch request is finished and google.longrunning.Operation.done is set to true.
  - state string (values: STATE_UNSPECIFIED, PROCESSING, SUCCESSFUL, FAILED, CANCELLED): The current state of the batch operation.
  - submitTime string: The time when the batch request was submitted to the server.
Block
- Block object: Logical element on the page.
  - blockType string (values: UNKNOWN, TEXT, TABLE, PICTURE, RULER, BARCODE): Detected block type (text, image etc) for this block.
  - boundingBox BoundingPoly
  - confidence number: Confidence of the OCR results on the block. Range [0, 1].
  - paragraphs array: List of paragraphs in this block (if this block is of type text).
    - items Paragraph
  - property TextProperty
BoundingPoly
- BoundingPoly object: A bounding polygon for the detected image annotation.
  - normalizedVertices array: The bounding polygon normalized vertices.
    - items NormalizedVertex
  - vertices array: The bounding polygon vertices.
    - items Vertex
Color
- Color object: Represents a color in the RGBA color space. This representation is designed for simplicity of conversion to/from color representations in various languages over compactness; for example, the fields of this representation can be trivially provided to the constructor of "java.awt.Color" in Java; it can also be trivially provided to UIColor's "+colorWithRed:green:blue:alpha" method in iOS; and, with just a little work, it can be easily formatted into a CSS "rgba()" string in JavaScript, as well.
  Note: this proto does not carry information about the absolute color space that should be used to interpret the RGB value (e.g. sRGB, Adobe RGB, DCI-P3, BT.2020, etc.). By default, applications SHOULD assume the sRGB color space.
  Note: when color equality needs to be decided, implementations, unless documented otherwise, will treat two colors as equal if all their red, green, blue, and alpha values each differ by at most 1e-5.
  Example (Java):
    import com.google.type.Color;
    // ...
    public static java.awt.Color fromProto(Color protocolor) {
      float alpha = protocolor.hasAlpha() ? protocolor.getAlpha().getValue() : 1.0f;
      return new java.awt.Color(protocolor.getRed(), protocolor.getGreen(), protocolor.getBlue(), alpha);
    }

    public static Color toProto(java.awt.Color color) {
      float red = (float) color.getRed();
      float green = (float) color.getGreen();
      float blue = (float) color.getBlue();
      float denominator = 255.0f;
      Color.Builder resultBuilder = Color.newBuilder()
          .setRed(red / denominator)
          .setGreen(green / denominator)
          .setBlue(blue / denominator);
      int alpha = color.getAlpha();
      if (alpha != 255) {
        resultBuilder.setAlpha(FloatValue.newBuilder().setValue(((float) alpha) / denominator).build());
      }
      return resultBuilder.build();
    }
    // ...
  Example (iOS / Obj-C):
    // ...
    static UIColor* fromProto(Color* protocolor) {
      float red = [protocolor red];
      float green = [protocolor green];
      float blue = [protocolor blue];
      FloatValue* alpha_wrapper = [protocolor alpha];
      float alpha = 1.0;
      if (alpha_wrapper != nil) {
        alpha = [alpha_wrapper value];
      }
      return [UIColor colorWithRed:red green:green blue:blue alpha:alpha];
    }

    static Color* toProto(UIColor* color) {
      CGFloat red, green, blue, alpha;
      if (![color getRed:&red green:&green blue:&blue alpha:&alpha]) {
        return nil;
      }
      Color* result = [[Color alloc] init];
      [result setRed:red];
      [result setGreen:green];
      [result setBlue:blue];
      if (alpha <= 0.9999) {
        [result setAlpha:floatWrapperWithValue(alpha)];
      }
      [result autorelease];
      return result;
    }
    // ...
  Example (JavaScript):
    // ...
    var protoToCssColor = function(rgb_color) {
      var redFrac = rgb_color.red || 0.0;
      var greenFrac = rgb_color.green || 0.0;
      var blueFrac = rgb_color.blue || 0.0;
      var red = Math.floor(redFrac * 255);
      var green = Math.floor(greenFrac * 255);
      var blue = Math.floor(blueFrac * 255);
      if (!('alpha' in rgb_color)) {
        return rgbToCssColor_(red, green, blue);
      }
      var alphaFrac = rgb_color.alpha.value || 0.0;
      var rgbParams = [red, green, blue].join(',');
      return ['rgba(', rgbParams, ',', alphaFrac, ')'].join('');
    };

    var rgbToCssColor_ = function(red, green, blue) {
      var rgbNumber = new Number((red << 16) | (green << 8) | blue);
      var hexString = rgbNumber.toString(16);
      var missingZeros = 6 - hexString.length;
      var resultBuilder = ['#'];
      for (var i = 0; i < missingZeros; i++) {
        resultBuilder.push('0');
      }
      resultBuilder.push(hexString);
      return resultBuilder.join('');
    };
    // ...
  - alpha number: The fraction of this color that should be applied to the pixel. That is, the final pixel color is defined by the equation: pixel color = alpha * (this color) + (1.0 - alpha) * (background color). This means that a value of 1.0 corresponds to a solid color, whereas a value of 0.0 corresponds to a completely transparent color. This uses a wrapper message rather than a simple float scalar so that it is possible to distinguish between a default value and the value being unset. If omitted, this color object is to be rendered as a solid color (as if the alpha value had been explicitly given with a value of 1.0).
  - blue number: The amount of blue in the color as a value in the interval [0, 1].
  - green number: The amount of green in the color as a value in the interval [0, 1].
  - red number: The amount of red in the color as a value in the interval [0, 1].
ColorInfo
- ColorInfo object: Color information consists of RGB channels, score, and the fraction of the image that the color occupies in the image.
  - color Color
  - pixelFraction number: The fraction of pixels the color occupies in the image. Value in range [0, 1].
  - score number: Image-specific score for this color. Value in range [0, 1].
CropHint
- CropHint object: Single crop hint that is used to generate a new crop when serving an image.
  - boundingPoly BoundingPoly
  - confidence number: Confidence of this being a salient region. Range [0, 1].
  - importanceFraction number: Fraction of importance of this salient region with respect to the original image.
CropHintsAnnotation
- CropHintsAnnotation object: Set of crop hints that are used to generate new crops when serving images.
  - cropHints array: Crop hint results.
    - items CropHint
DetectedBreak
- DetectedBreak object: Detected start or end of a structural component.
  - isPrefix boolean: True if break prepends the element.
  - type string (values: UNKNOWN, SPACE, SURE_SPACE, EOL_SURE_SPACE, HYPHEN, LINE_BREAK): Detected break type.
DetectedLanguage
- DetectedLanguage object: Detected language for a structural component.
  - confidence number: Confidence of detected language. Range [0, 1].
  - languageCode string: The BCP-47 language code, such as "en-US" or "sr-Latn". For more information, see http://www.unicode.org/reports/tr35/#Unicode_locale_identifier.
DominantColorsAnnotation
- DominantColorsAnnotation object: Set of dominant colors and their corresponding scores.
  - colors array: RGB color values with their score and pixel fraction.
    - items ColorInfo
EntityAnnotation
- EntityAnnotation object: Set of detected entity features.
  - boundingPoly BoundingPoly
  - confidence number: Deprecated. Use score instead. The accuracy of the entity detection in an image. For example, for an image in which the "Eiffel Tower" entity is detected, this field represents the confidence that there is a tower in the query image. Range [0, 1].
  - description string: Entity textual description, expressed in its locale language.
  - locale string: The language code for the locale in which the entity textual description is expressed.
  - locations array: The location information for the detected entity. Multiple LocationInfo elements can be present because one location may indicate the location of the scene in the image, and another location may indicate the location of the place where the image was taken. Location information is usually present for landmarks.
    - items LocationInfo
  - mid string: Opaque entity ID. Some IDs may be available in Google Knowledge Graph Search API.
  - properties array: Some entities may have optional user-supplied Property (name/value) fields, such as a score or string that qualifies the entity.
    - items Property
  - score number: Overall score of the result. Range [0, 1].
  - topicality number: The relevancy of the ICA (Image Content Annotation) label to the image. For example, the relevancy of "tower" is likely higher to an image containing the detected "Eiffel Tower" than to an image containing a detected distant towering building, even though the confidence that there is a tower in each image may be the same. Range [0, 1].
FaceAnnotation
- FaceAnnotation object: A face annotation object contains the results of face detection.
  - angerLikelihood string (values: UNKNOWN, VERY_UNLIKELY, UNLIKELY, POSSIBLE, LIKELY, VERY_LIKELY): Anger likelihood.
  - blurredLikelihood string (values: UNKNOWN, VERY_UNLIKELY, UNLIKELY, POSSIBLE, LIKELY, VERY_LIKELY): Blurred likelihood.
  - boundingPoly BoundingPoly
  - detectionConfidence number: Detection confidence. Range [0, 1].
  - fdBoundingPoly BoundingPoly
  - headwearLikelihood string (values: UNKNOWN, VERY_UNLIKELY, UNLIKELY, POSSIBLE, LIKELY, VERY_LIKELY): Headwear likelihood.
  - joyLikelihood string (values: UNKNOWN, VERY_UNLIKELY, UNLIKELY, POSSIBLE, LIKELY, VERY_LIKELY): Joy likelihood.
  - landmarkingConfidence number: Face landmarking confidence. Range [0, 1].
  - landmarks array: Detected face landmarks.
    - items Landmark
  - panAngle number: Yaw angle, which indicates the leftward/rightward angle that the face is pointing relative to the vertical plane perpendicular to the image. Range [-180,180].
  - rollAngle number: Roll angle, which indicates the amount of clockwise/anti-clockwise rotation of the face relative to the image vertical about the axis perpendicular to the face. Range [-180,180].
  - sorrowLikelihood string (values: UNKNOWN, VERY_UNLIKELY, UNLIKELY, POSSIBLE, LIKELY, VERY_LIKELY): Sorrow likelihood.
  - surpriseLikelihood string (values: UNKNOWN, VERY_UNLIKELY, UNLIKELY, POSSIBLE, LIKELY, VERY_LIKELY): Surprise likelihood.
  - tiltAngle number: Pitch angle, which indicates the upwards/downwards angle that the face is pointing relative to the image's horizontal plane. Range [-180,180].
  - underExposedLikelihood string (values: UNKNOWN, VERY_UNLIKELY, UNLIKELY, POSSIBLE, LIKELY, VERY_LIKELY): Under-exposed likelihood.
GcsDestination
- GcsDestination object: The Google Cloud Storage location where the output will be written to.
  - uri string: Google Cloud Storage URI prefix where the results will be stored. Results will be in JSON format and preceded by their corresponding input URI prefix. This field can represent either a GCS file prefix or a GCS directory. In either case, the uri should be unique, because in order to get all of the output files you will need to do a wildcard GCS search on the uri prefix you provide. Examples: File prefix: gs://bucket-name/here/filenameprefix. The output files will be created in gs://bucket-name/here/ and their names will begin with "filenameprefix". Directory prefix: gs://bucket-name/some/location/. The output files will be created in gs://bucket-name/some/location/ and their names could be anything, because no filename prefix was specified. If there are multiple outputs, each response is still an AnnotateFileResponse, each of which contains some subset of the full list of AnnotateImageResponse. Multiple outputs can happen if, for example, the output JSON is too large and overflows into multiple sharded files.
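A sketch of the "wildcard search" the uri description calls for, using the separate @google-cloud/storage package (an assumption, not part of this client) and a hypothetical bucket and prefix:

const { Storage } = require('@google-cloud/storage');

async function listOutputShards() {
  const storage = new Storage();
  // A prefix query returns every output shard written under the uri prefix.
  const [files] = await storage
    .bucket('bucket-name')
    .getFiles({ prefix: 'here/filenameprefix' });
  return files.map(f => f.name);
}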
GcsSource
- GcsSource object: The Google Cloud Storage location where the input will be read from.
  - uri string: Google Cloud Storage URI for the input file. This must only be a Google Cloud Storage object. Wildcards are not currently supported.
GoogleCloudVisionV1p1beta1AnnotateFileResponse
- GoogleCloudVisionV1p1beta1AnnotateFileResponse object: Response to a single file annotation request. A file may contain one or more images, which individually have their own responses.
  - responses array: Individual responses to images found within the file. This field will be empty if the error field is set.
  - error Status
  - inputConfig GoogleCloudVisionV1p1beta1InputConfig
  - totalPages integer: This field gives the total number of pages in the file.
GoogleCloudVisionV1p1beta1AnnotateImageResponse
- GoogleCloudVisionV1p1beta1AnnotateImageResponse object: Response to an image annotation request.
  - context GoogleCloudVisionV1p1beta1ImageAnnotationContext
  - cropHintsAnnotation GoogleCloudVisionV1p1beta1CropHintsAnnotation
  - error Status
  - faceAnnotations array: If present, face detection has completed successfully.
  - fullTextAnnotation GoogleCloudVisionV1p1beta1TextAnnotation
  - imagePropertiesAnnotation GoogleCloudVisionV1p1beta1ImageProperties
  - labelAnnotations array: If present, label detection has completed successfully.
  - landmarkAnnotations array: If present, landmark detection has completed successfully.
  - localizedObjectAnnotations array: If present, localized object detection has completed successfully. This will be sorted descending by confidence score.
  - logoAnnotations array: If present, logo detection has completed successfully.
  - productSearchResults GoogleCloudVisionV1p1beta1ProductSearchResults
  - safeSearchAnnotation GoogleCloudVisionV1p1beta1SafeSearchAnnotation
  - textAnnotations array: If present, text (OCR) detection has completed successfully.
  - webDetection GoogleCloudVisionV1p1beta1WebDetection
GoogleCloudVisionV1p1beta1AsyncAnnotateFileResponse
- GoogleCloudVisionV1p1beta1AsyncAnnotateFileResponse object: The response for a single offline file annotation request.
  - outputConfig GoogleCloudVisionV1p1beta1OutputConfig
GoogleCloudVisionV1p1beta1AsyncBatchAnnotateFilesResponse
- GoogleCloudVisionV1p1beta1AsyncBatchAnnotateFilesResponse object: Response to an async batch file annotation request.
  - responses array: The list of file annotation responses, one for each request in AsyncBatchAnnotateFilesRequest.
GoogleCloudVisionV1p1beta1Block
- GoogleCloudVisionV1p1beta1Block object: Logical element on the page.
  - blockType string (values: UNKNOWN, TEXT, TABLE, PICTURE, RULER, BARCODE): Detected block type (text, image etc) for this block.
  - boundingBox GoogleCloudVisionV1p1beta1BoundingPoly
  - confidence number: Confidence of the OCR results on the block. Range [0, 1].
  - paragraphs array: List of paragraphs in this block (if this block is of type text).
  - property GoogleCloudVisionV1p1beta1TextAnnotationTextProperty
GoogleCloudVisionV1p1beta1BoundingPoly
- GoogleCloudVisionV1p1beta1BoundingPoly object: A bounding polygon for the detected image annotation.
  - normalizedVertices array: The bounding polygon normalized vertices.
  - vertices array: The bounding polygon vertices.
GoogleCloudVisionV1p1beta1ColorInfo
- GoogleCloudVisionV1p1beta1ColorInfo object: Color information consists of RGB channels, score, and the fraction of the image that the color occupies in the image.
  - color Color
  - pixelFraction number: The fraction of pixels the color occupies in the image. Value in range [0, 1].
  - score number: Image-specific score for this color. Value in range [0, 1].
GoogleCloudVisionV1p1beta1CropHint
- GoogleCloudVisionV1p1beta1CropHint object: Single crop hint that is used to generate a new crop when serving an image.
  - boundingPoly GoogleCloudVisionV1p1beta1BoundingPoly
  - confidence number: Confidence of this being a salient region. Range [0, 1].
  - importanceFraction number: Fraction of importance of this salient region with respect to the original image.
GoogleCloudVisionV1p1beta1CropHintsAnnotation
- GoogleCloudVisionV1p1beta1CropHintsAnnotation object: Set of crop hints that are used to generate new crops when serving images.
  - cropHints array: Crop hint results.
GoogleCloudVisionV1p1beta1DominantColorsAnnotation
- GoogleCloudVisionV1p1beta1DominantColorsAnnotation object: Set of dominant colors and their corresponding scores.
  - colors array: RGB color values with their score and pixel fraction.
GoogleCloudVisionV1p1beta1EntityAnnotation
- GoogleCloudVisionV1p1beta1EntityAnnotation object: Set of detected entity features.
  - boundingPoly GoogleCloudVisionV1p1beta1BoundingPoly
  - confidence number: Deprecated. Use score instead. The accuracy of the entity detection in an image. For example, for an image in which the "Eiffel Tower" entity is detected, this field represents the confidence that there is a tower in the query image. Range [0, 1].
  - description string: Entity textual description, expressed in its locale language.
  - locale string: The language code for the locale in which the entity textual description is expressed.
  - locations array: The location information for the detected entity. Multiple LocationInfo elements can be present because one location may indicate the location of the scene in the image, and another location may indicate the location of the place where the image was taken. Location information is usually present for landmarks.
  - mid string: Opaque entity ID. Some IDs may be available in Google Knowledge Graph Search API.
  - properties array: Some entities may have optional user-supplied Property (name/value) fields, such as a score or string that qualifies the entity.
  - score number: Overall score of the result. Range [0, 1].
  - topicality number: The relevancy of the ICA (Image Content Annotation) label to the image. For example, the relevancy of "tower" is likely higher to an image containing the detected "Eiffel Tower" than to an image containing a detected distant towering building, even though the confidence that there is a tower in each image may be the same. Range [0, 1].
GoogleCloudVisionV1p1beta1FaceAnnotation
- GoogleCloudVisionV1p1beta1FaceAnnotation object: A face annotation object contains the results of face detection.
  - angerLikelihood string (values: UNKNOWN, VERY_UNLIKELY, UNLIKELY, POSSIBLE, LIKELY, VERY_LIKELY): Anger likelihood.
  - blurredLikelihood string (values: UNKNOWN, VERY_UNLIKELY, UNLIKELY, POSSIBLE, LIKELY, VERY_LIKELY): Blurred likelihood.
  - boundingPoly GoogleCloudVisionV1p1beta1BoundingPoly
  - detectionConfidence number: Detection confidence. Range [0, 1].
  - fdBoundingPoly GoogleCloudVisionV1p1beta1BoundingPoly
  - headwearLikelihood string (values: UNKNOWN, VERY_UNLIKELY, UNLIKELY, POSSIBLE, LIKELY, VERY_LIKELY): Headwear likelihood.
  - joyLikelihood string (values: UNKNOWN, VERY_UNLIKELY, UNLIKELY, POSSIBLE, LIKELY, VERY_LIKELY): Joy likelihood.
  - landmarkingConfidence number: Face landmarking confidence. Range [0, 1].
  - landmarks array: Detected face landmarks.
  - panAngle number: Yaw angle, which indicates the leftward/rightward angle that the face is pointing relative to the vertical plane perpendicular to the image. Range [-180,180].
  - rollAngle number: Roll angle, which indicates the amount of clockwise/anti-clockwise rotation of the face relative to the image vertical about the axis perpendicular to the face. Range [-180,180].
  - sorrowLikelihood string (values: UNKNOWN, VERY_UNLIKELY, UNLIKELY, POSSIBLE, LIKELY, VERY_LIKELY): Sorrow likelihood.
  - surpriseLikelihood string (values: UNKNOWN, VERY_UNLIKELY, UNLIKELY, POSSIBLE, LIKELY, VERY_LIKELY): Surprise likelihood.
  - tiltAngle number: Pitch angle, which indicates the upwards/downwards angle that the face is pointing relative to the image's horizontal plane. Range [-180,180].
  - underExposedLikelihood string (values: UNKNOWN, VERY_UNLIKELY, UNLIKELY, POSSIBLE, LIKELY, VERY_LIKELY): Under-exposed likelihood.
GoogleCloudVisionV1p1beta1FaceAnnotationLandmark
- GoogleCloudVisionV1p1beta1FaceAnnotationLandmark object: A face-specific landmark (for example, a face feature).
  - position GoogleCloudVisionV1p1beta1Position
  - type string (values: UNKNOWN_LANDMARK, LEFT_EYE, RIGHT_EYE, LEFT_OF_LEFT_EYEBROW, RIGHT_OF_LEFT_EYEBROW, LEFT_OF_RIGHT_EYEBROW, RIGHT_OF_RIGHT_EYEBROW, MIDPOINT_BETWEEN_EYES, NOSE_TIP, UPPER_LIP, LOWER_LIP, MOUTH_LEFT, MOUTH_RIGHT, MOUTH_CENTER, NOSE_BOTTOM_RIGHT, NOSE_BOTTOM_LEFT, NOSE_BOTTOM_CENTER, LEFT_EYE_TOP_BOUNDARY, LEFT_EYE_RIGHT_CORNER, LEFT_EYE_BOTTOM_BOUNDARY, LEFT_EYE_LEFT_CORNER, RIGHT_EYE_TOP_BOUNDARY, RIGHT_EYE_RIGHT_CORNER, RIGHT_EYE_BOTTOM_BOUNDARY, RIGHT_EYE_LEFT_CORNER, LEFT_EYEBROW_UPPER_MIDPOINT, RIGHT_EYEBROW_UPPER_MIDPOINT, LEFT_EAR_TRAGION, RIGHT_EAR_TRAGION, LEFT_EYE_PUPIL, RIGHT_EYE_PUPIL, FOREHEAD_GLABELLA, CHIN_GNATHION, CHIN_LEFT_GONION, CHIN_RIGHT_GONION, LEFT_CHEEK_CENTER, RIGHT_CHEEK_CENTER): Face landmark type.
GoogleCloudVisionV1p1beta1GcsDestination
- GoogleCloudVisionV1p1beta1GcsDestination object: The Google Cloud Storage location where the output will be written to.
  - uri string: Google Cloud Storage URI prefix where the results will be stored. Results will be in JSON format and preceded by their corresponding input URI prefix. This field can represent either a GCS file prefix or a GCS directory. In either case, the uri should be unique, because in order to get all of the output files you will need to do a wildcard GCS search on the uri prefix you provide. Examples: File prefix: gs://bucket-name/here/filenameprefix. The output files will be created in gs://bucket-name/here/ and their names will begin with "filenameprefix". Directory prefix: gs://bucket-name/some/location/. The output files will be created in gs://bucket-name/some/location/ and their names could be anything, because no filename prefix was specified. If there are multiple outputs, each response is still an AnnotateFileResponse, each of which contains some subset of the full list of AnnotateImageResponse. Multiple outputs can happen if, for example, the output JSON is too large and overflows into multiple sharded files.
GoogleCloudVisionV1p1beta1GcsSource
- GoogleCloudVisionV1p1beta1GcsSource object: The Google Cloud Storage location where the input will be read from.
  - uri string: Google Cloud Storage URI for the input file. This must only be a Google Cloud Storage object. Wildcards are not currently supported.
GoogleCloudVisionV1p1beta1ImageAnnotationContext
- GoogleCloudVisionV1p1beta1ImageAnnotationContext object: If an image was produced from a file (e.g. a PDF), this message gives information about the source of that image.
  - pageNumber integer: If the file was a PDF or TIFF, this field gives the page number within the file used to produce the image.
  - uri string: The URI of the file used to produce the image.
GoogleCloudVisionV1p1beta1ImageProperties
- GoogleCloudVisionV1p1beta1ImageProperties object: Stores image properties, such as dominant colors.
  - dominantColors GoogleCloudVisionV1p1beta1DominantColorsAnnotation
GoogleCloudVisionV1p1beta1InputConfig
- GoogleCloudVisionV1p1beta1InputConfig object: The desired input location and metadata.
  - content string: File content, represented as a stream of bytes. Note: As with all bytes fields, protobuffers use a pure binary representation, whereas JSON representations use base64. Currently, this field only works for BatchAnnotateFiles requests. It does not work for AsyncBatchAnnotateFiles requests.
  - gcsSource GoogleCloudVisionV1p1beta1GcsSource
  - mimeType string: The type of the file. Currently only "application/pdf", "image/tiff" and "image/gif" are supported. Wildcards are not supported.
GoogleCloudVisionV1p1beta1LocalizedObjectAnnotation
- GoogleCloudVisionV1p1beta1LocalizedObjectAnnotation object: Set of detected objects with bounding boxes.
  - boundingPoly GoogleCloudVisionV1p1beta1BoundingPoly
  - languageCode string: The BCP-47 language code, such as "en-US" or "sr-Latn". For more information, see http://www.unicode.org/reports/tr35/#Unicode_locale_identifier.
  - mid string: Object ID that should align with EntityAnnotation mid.
  - name string: Object name, expressed in its language_code language.
  - score number: Score of the result. Range [0, 1].
GoogleCloudVisionV1p1beta1LocationInfo
- GoogleCloudVisionV1p1beta1LocationInfo object: Detected entity location information.
  - latLng LatLng
GoogleCloudVisionV1p1beta1NormalizedVertex
- GoogleCloudVisionV1p1beta1NormalizedVertex object: A vertex represents a 2D point in the image. NOTE: the normalized vertex coordinates are relative to the original image and range from 0 to 1.
  - x number: X coordinate.
  - y number: Y coordinate.
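A small sketch of converting normalized vertices (range [0, 1]) back to pixel coordinates using the image dimensions, for example to draw a bounding box:

function toPixels(normalizedVertices, width, height) {
  return normalizedVertices.map(v => ({
    x: Math.round((v.x || 0) * width),  // coordinates at 0 may be omitted in JSON
    y: Math.round((v.y || 0) * height)
  }));
}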
GoogleCloudVisionV1p1beta1OperationMetadata
- GoogleCloudVisionV1p1beta1OperationMetadata object: Contains metadata for the BatchAnnotateImages operation.
  - createTime string: The time when the batch request was received.
  - state string (values: STATE_UNSPECIFIED, CREATED, RUNNING, DONE, CANCELLED): Current state of the batch operation.
  - updateTime string: The time when the operation result was last updated.
GoogleCloudVisionV1p1beta1OutputConfig
- GoogleCloudVisionV1p1beta1OutputConfig object: The desired output location and metadata.
  - batchSize integer: The max number of response protos to put into each output JSON file on Google Cloud Storage. The valid range is [1, 100]. If not specified, the default value is 20. For example, for one pdf file with 100 pages, 100 response protos will be generated. If batch_size = 20, then 5 json files each containing 20 response protos will be written under the prefix gcs_destination.uri. Currently, batch_size only applies to GcsDestination, with potential future support for other output configurations.
  - gcsDestination GoogleCloudVisionV1p1beta1GcsDestination
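The sharding arithmetic described above, restated as a one-liner: one response proto per page, batchSize protos per output file.

function expectedShardCount(pageCount, batchSize = 20) {
  return Math.ceil(pageCount / batchSize); // 100 pages, batchSize 20 -> 5 files
}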
GoogleCloudVisionV1p1beta1Page
- GoogleCloudVisionV1p1beta1Page object: Detected page from OCR.
  - blocks array: List of blocks of text, images etc on this page.
  - confidence number: Confidence of the OCR results on the page. Range [0, 1].
  - height integer: Page height. For PDFs the unit is points. For images (including TIFFs) the unit is pixels.
  - property GoogleCloudVisionV1p1beta1TextAnnotationTextProperty
  - width integer: Page width. For PDFs the unit is points. For images (including TIFFs) the unit is pixels.
GoogleCloudVisionV1p1beta1Paragraph
- GoogleCloudVisionV1p1beta1Paragraph object: Structural unit of text representing a number of words in certain order.
  - boundingBox GoogleCloudVisionV1p1beta1BoundingPoly
  - confidence number: Confidence of the OCR results for the paragraph. Range [0, 1].
  - property GoogleCloudVisionV1p1beta1TextAnnotationTextProperty
  - words array: List of all words in this paragraph.
GoogleCloudVisionV1p1beta1Position
- GoogleCloudVisionV1p1beta1Position object: A 3D position in the image, used primarily for Face detection landmarks. A valid Position must have both x and y coordinates. The position coordinates are in the same scale as the original image.
  - x number: X coordinate.
  - y number: Y coordinate.
  - z number: Z coordinate (or depth).
GoogleCloudVisionV1p1beta1Product
- GoogleCloudVisionV1p1beta1Product object: A Product contains ReferenceImages.
  - description string: User-provided metadata to be stored with this product. Must be at most 4096 characters long.
  - displayName string: The user-provided name for this Product. Must not be empty. Must be at most 4096 characters long.
  - name string: The resource name of the product. Format is: projects/PROJECT_ID/locations/LOC_ID/products/PRODUCT_ID. This field is ignored when creating a product.
  - productCategory string: Immutable. The category for the product identified by the reference image. This should be one of "homegoods-v2", "apparel-v2", "toys-v2", "packagedgoods-v1" or "general-v1". The legacy categories "homegoods", "apparel", and "toys" are still supported, but these should not be used for new products.
  - productLabels array: Key-value pairs that can be attached to a product. At query time, constraints can be specified based on the product_labels. Note that integer values can be provided as strings, e.g. "1199". Only strings with integer values can match a range-based restriction which is to be supported soon. Multiple values can be assigned to the same key. One product may have up to 500 product_labels. Notice that the total number of distinct product_labels over all products in one ProductSet cannot exceed 1M, otherwise the product search pipeline will refuse to work for that ProductSet.
GoogleCloudVisionV1p1beta1ProductKeyValue
- GoogleCloudVisionV1p1beta1ProductKeyValue object: A product label represented as a key-value pair.
  - key string: The key of the label attached to the product. Cannot be empty and cannot exceed 128 bytes.
  - value string: The value of the label attached to the product. Cannot be empty and cannot exceed 128 bytes.
GoogleCloudVisionV1p1beta1ProductSearchResults
- GoogleCloudVisionV1p1beta1ProductSearchResults object: Results for a product search request.
  - indexTime string: Timestamp of the index which provided these results. Products added to the product set and products removed from the product set after this time are not reflected in the current results.
  - productGroupedResults array: List of results grouped by products detected in the query image. Each entry corresponds to one bounding polygon in the query image, and contains the matching products specific to that region. There may be duplicate product matches in the union of all the per-product results.
  - results array: List of results, one for each product match.
GoogleCloudVisionV1p1beta1ProductSearchResultsGroupedResult
- GoogleCloudVisionV1p1beta1ProductSearchResultsGroupedResult object: Information about the products similar to a single product in a query image.
  - boundingPoly GoogleCloudVisionV1p1beta1BoundingPoly
  - objectAnnotations array: List of generic predictions for the object in the bounding box.
  - results array: List of results, one for each product match.
GoogleCloudVisionV1p1beta1ProductSearchResultsObjectAnnotation
- GoogleCloudVisionV1p1beta1ProductSearchResultsObjectAnnotation object: Prediction for what the object in the bounding box is.
  - languageCode string: The BCP-47 language code, such as "en-US" or "sr-Latn". For more information, see http://www.unicode.org/reports/tr35/#Unicode_locale_identifier.
  - mid string: Object ID that should align with EntityAnnotation mid.
  - name string: Object name, expressed in its language_code language.
  - score number: Score of the result. Range [0, 1].
GoogleCloudVisionV1p1beta1ProductSearchResultsResult
- GoogleCloudVisionV1p1beta1ProductSearchResultsResult object: Information about a product.
  - image string: The resource name of the image from the product that is the closest match to the query.
  - product GoogleCloudVisionV1p1beta1Product
  - score number: A confidence level on the match, ranging from 0 (no confidence) to 1 (full confidence).
GoogleCloudVisionV1p1beta1Property
- GoogleCloudVisionV1p1beta1Property object: A Property consists of a user-supplied name/value pair.
  - name string: Name of the property.
  - uint64Value string: Value of numeric properties.
  - value string: Value of the property.
GoogleCloudVisionV1p1beta1SafeSearchAnnotation
- GoogleCloudVisionV1p1beta1SafeSearchAnnotation object: Set of features pertaining to the image, computed by computer vision methods over safe-search verticals (for example, adult, spoof, medical, violence).
  - adult string (values: UNKNOWN, VERY_UNLIKELY, UNLIKELY, POSSIBLE, LIKELY, VERY_LIKELY): Represents the adult content likelihood for the image. Adult content may contain elements such as nudity, pornographic images or cartoons, or sexual activities.
  - medical string (values: UNKNOWN, VERY_UNLIKELY, UNLIKELY, POSSIBLE, LIKELY, VERY_LIKELY): Likelihood that this is a medical image.
  - racy string (values: UNKNOWN, VERY_UNLIKELY, UNLIKELY, POSSIBLE, LIKELY, VERY_LIKELY): Likelihood that the request image contains racy content. Racy content may include (but is not limited to) skimpy or sheer clothing, strategically covered nudity, lewd or provocative poses, or close-ups of sensitive body areas.
  - spoof string (values: UNKNOWN, VERY_UNLIKELY, UNLIKELY, POSSIBLE, LIKELY, VERY_LIKELY): Spoof likelihood. The likelihood that a modification was made to the image's canonical version to make it appear funny or offensive.
  - violence string (values: UNKNOWN, VERY_UNLIKELY, UNLIKELY, POSSIBLE, LIKELY, VERY_LIKELY): Likelihood that this image contains violent content.
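Since the likelihood values form an ordered scale, a simple rank lookup lets you gate content on a per-vertical threshold. A hedged sketch; the verticals and threshold here are illustrative choices:

const LIKELIHOOD_RANK = ['UNKNOWN', 'VERY_UNLIKELY', 'UNLIKELY', 'POSSIBLE', 'LIKELY', 'VERY_LIKELY'];

function isBlocked(safeSearch, threshold = 'LIKELY') {
  const limit = LIKELIHOOD_RANK.indexOf(threshold);
  return ['adult', 'racy', 'violence'].some(
    vertical => LIKELIHOOD_RANK.indexOf(safeSearch[vertical]) >= limit);
}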
GoogleCloudVisionV1p1beta1Symbol
- GoogleCloudVisionV1p1beta1Symbol object: A single symbol representation.
  - boundingBox GoogleCloudVisionV1p1beta1BoundingPoly
  - confidence number: Confidence of the OCR results for the symbol. Range [0, 1].
  - property GoogleCloudVisionV1p1beta1TextAnnotationTextProperty
  - text string: The actual UTF-8 representation of the symbol.
GoogleCloudVisionV1p1beta1TextAnnotation
- GoogleCloudVisionV1p1beta1TextAnnotation object: TextAnnotation contains a structured representation of OCR extracted text. The hierarchy of an OCR extracted text structure is: TextAnnotation -> Page -> Block -> Paragraph -> Word -> Symbol. Each structural component, starting from Page, may further have its own properties. Properties describe detected languages, breaks, etc. Please refer to the TextAnnotation.TextProperty message definition below for more detail.
  - pages array: List of pages detected by OCR.
  - text string: UTF-8 text detected on the pages.
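A sketch of walking the Page -> Block -> Paragraph -> Word -> Symbol hierarchy described above to rebuild each word, for cases where the flat text field is not enough:

function words(fullTextAnnotation) {
  const out = [];
  for (const page of fullTextAnnotation.pages || [])
    for (const block of page.blocks || [])
      for (const paragraph of block.paragraphs || [])
        for (const word of paragraph.words || [])
          out.push((word.symbols || []).map(s => s.text).join(''));
  return out;
}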
GoogleCloudVisionV1p1beta1TextAnnotationDetectedBreak
- GoogleCloudVisionV1p1beta1TextAnnotationDetectedBreak object: Detected start or end of a structural component.
  - isPrefix boolean: True if break prepends the element.
  - type string (values: UNKNOWN, SPACE, SURE_SPACE, EOL_SURE_SPACE, HYPHEN, LINE_BREAK): Detected break type.
GoogleCloudVisionV1p1beta1TextAnnotationDetectedLanguage
- GoogleCloudVisionV1p1beta1TextAnnotationDetectedLanguage object: Detected language for a structural component.
  - confidence number: Confidence of detected language. Range [0, 1].
  - languageCode string: The BCP-47 language code, such as "en-US" or "sr-Latn". For more information, see http://www.unicode.org/reports/tr35/#Unicode_locale_identifier.
GoogleCloudVisionV1p1beta1TextAnnotationTextProperty
- GoogleCloudVisionV1p1beta1TextAnnotationTextProperty object: Additional information detected on the structural component.
  - detectedBreak GoogleCloudVisionV1p1beta1TextAnnotationDetectedBreak
  - detectedLanguages array: A list of detected languages together with confidence.
GoogleCloudVisionV1p1beta1Vertex
- GoogleCloudVisionV1p1beta1Vertex object: A vertex represents a 2D point in the image. NOTE: the vertex coordinates are in the same scale as the original image.
  - x integer: X coordinate.
  - y integer: Y coordinate.
GoogleCloudVisionV1p1beta1WebDetection
- GoogleCloudVisionV1p1beta1WebDetection object: Relevant information for the image from the Internet.
  - bestGuessLabels array: The service's best guess as to the topic of the request image. Inferred from similar images on the open web.
  - fullMatchingImages array: Fully matching images from the Internet. Can include resized copies of the query image.
  - pagesWithMatchingImages array: Web pages containing the matching images from the Internet.
  - partialMatchingImages array: Partial matching images from the Internet. Those images are similar enough to share some key-point features. For example an original image will likely have partial matching for its crops.
  - visuallySimilarImages array: The visually similar image results.
  - webEntities array: Deduced entities from similar images on the Internet.
GoogleCloudVisionV1p1beta1WebDetectionWebEntity
- GoogleCloudVisionV1p1beta1WebDetectionWebEntity object: Entity deduced from similar images on the Internet.
  - description string: Canonical description of the entity, in English.
  - entityId string: Opaque entity ID.
  - score number: Overall relevancy score for the entity. Not normalized and not comparable across different image queries.
GoogleCloudVisionV1p1beta1WebDetectionWebImage
- GoogleCloudVisionV1p1beta1WebDetectionWebImage object: Metadata for online images.
  - score number: (Deprecated) Overall relevancy score for the image.
  - url string: The result image URL.
GoogleCloudVisionV1p1beta1WebDetectionWebLabel
- GoogleCloudVisionV1p1beta1WebDetectionWebLabel object: Label to provide extra metadata for the web detection.
  - label string: Label for extra metadata.
  - languageCode string: The BCP-47 language code for label, such as "en-US" or "sr-Latn". For more information, see http://www.unicode.org/reports/tr35/#Unicode_locale_identifier.
GoogleCloudVisionV1p1beta1WebDetectionWebPage
- GoogleCloudVisionV1p1beta1WebDetectionWebPage object: Metadata for web pages.
  - fullMatchingImages array: Fully matching images on the page. Can include resized copies of the query image.
  - pageTitle string: Title for the web page; may contain HTML markup.
  - partialMatchingImages array: Partial matching images on the page. Those images are similar enough to share some key-point features. For example an original image will likely have partial matching for its crops.
  - score number: (Deprecated) Overall relevancy score for the web page.
  - url string: The result web page URL.
GoogleCloudVisionV1p1beta1Word
- GoogleCloudVisionV1p1beta1Word object: A word representation.
  - boundingBox GoogleCloudVisionV1p1beta1BoundingPoly
  - confidence number: Confidence of the OCR results for the word. Range [0, 1].
  - property GoogleCloudVisionV1p1beta1TextAnnotationTextProperty
  - symbols array: List of symbols in the word. The order of the symbols follows the natural reading order.
GoogleCloudVisionV1p2beta1AnnotateFileRequest
- GoogleCloudVisionV1p2beta1AnnotateFileRequest object: A request to annotate one single file, e.g. a PDF, TIFF or GIF file.
  - features array: Required. Requested features.
  - imageContext GoogleCloudVisionV1p2beta1ImageContext
  - inputConfig GoogleCloudVisionV1p2beta1InputConfig
  - pages array: Pages of the file to perform image annotation on. Pages start from 1; we assume the first page of the file is page 1. At most 5 pages are supported per request. Pages can be negative. Page 1 means the first page. Page 2 means the second page. Page -1 means the last page. Page -2 means the second-to-last page. If the file is GIF instead of PDF or TIFF, page refers to GIF frames. If this field is empty, by default the service performs image annotation for the first 5 pages of the file. (See the sketch below.)
    - items integer
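The sketch referenced above: a request body using the negative indexing described for pages, with a hypothetical GCS URI, selecting the first, second, and last pages in a single (max 5 page) request.

const request = {
  inputConfig: { gcsSource: { uri: 'gs://my-bucket/manual.pdf' }, mimeType: 'application/pdf' },
  features: [{ type: 'DOCUMENT_TEXT_DETECTION' }],
  pages: [1, 2, -1] // -1 selects the last page
};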
GoogleCloudVisionV1p2beta1AnnotateFileResponse
- GoogleCloudVisionV1p2beta1AnnotateFileResponse object: Response to a single file annotation request. A file may contain one or more images, which individually have their own responses.
  - responses array: Individual responses to images found within the file. This field will be empty if the error field is set.
  - error Status
  - inputConfig GoogleCloudVisionV1p2beta1InputConfig
  - totalPages integer: This field gives the total number of pages in the file.
GoogleCloudVisionV1p2beta1AnnotateImageRequest
- GoogleCloudVisionV1p2beta1AnnotateImageRequest object: Request for performing Google Cloud Vision API tasks over a user-provided image, with user-requested features, and with context information.
  - features array: Requested features.
  - image GoogleCloudVisionV1p2beta1Image
  - imageContext GoogleCloudVisionV1p2beta1ImageContext
GoogleCloudVisionV1p2beta1AnnotateImageResponse
- GoogleCloudVisionV1p2beta1AnnotateImageResponse object: Response to an image annotation request.
  - context GoogleCloudVisionV1p2beta1ImageAnnotationContext
  - cropHintsAnnotation GoogleCloudVisionV1p2beta1CropHintsAnnotation
  - error Status