Datasheet Parameter Extraction in Plain GraphQL

These are the raw GraphQL queries and mutations to import and process a datasheet with Copilot, identify the components in the datasheet, and then extract one of those components into a Flux project with custom electrical parameters you specify, with some manual steps in between. The manual bits are mostly copying values from one query result into the next, plus uploading the datasheet file to a provided URL. For the upload you'll need a tool that supports the HTTP PUT method; this guide uses curl, which is available on most systems.

tip

Try these steps out in your browser with our GraphQL web client! See the client's instructions and usage tips.

Step 1: Get your hands on a suitable datasheet.

For this guide, we'll use the datasheet for the Alpha & Omega AO3422 N-Channel Enhancement Mode Field Effect Transistor. Download it to your local system, and make note of the download path; for us, the file was saved at "~/Downloads/AO3422.pdf".

Step 2: Make sure you have your Flux API key handy.

See how to generate your very own API key and how authentication and authorization work in the Flux API.
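Every request in this guide is an ordinary GraphQL-over-HTTP POST carrying your API key. As a rough sketch of what any client does under the hood (the endpoint URL and Authorization header below are assumptions for illustration; consult the authentication docs for the exact values):

```python
import json
import urllib.request

# NOTE: the endpoint URL and auth header scheme are assumptions for
# illustration; check the Flux authentication docs for the real values.
API_URL = "https://api.flux.ai/graphql"

def build_graphql_request(query, api_key, variables=None):
    """Build (but do not send) an HTTP POST carrying a GraphQL query."""
    body = json.dumps({"query": query, "variables": variables or {}}).encode()
    return urllib.request.Request(
        API_URL,
        data=body,
        method="POST",
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",  # assumed auth scheme
        },
    )

# urllib.request.urlopen(req) would send the request and return the response.
req = build_graphql_request(
    'query getOrganizationUid { organization(by: "an-example-org") { uid } }',
    "YOUR_API_KEY",
)
```

The same helper works for every query and mutation in the steps below; only the query string changes.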

Step 3: (Optional) Get your organization's uid.

If you intend to create a project within an organization instead of your personal profile, you'll need to grab the uid for your organization to use in the next step.

Replace the "an-example-org" string with your own organization's handle, which is displayed in the URL path and in the left sidebar when you visit your organization's profile page. In this example, it is https://www.flux.ai/an-example-org.

request
query getOrganizationUid {
  organization(by: "an-example-org") {
    uid
    displayName
  }
}
response
{
  "data": {
    "organization": {
      "uid": "org_4508d7a8-2cc8-4961-9bd7-ca375aa3f75f",
      "displayName": "An Example Organization"
    }
  }
}

Step 4: Request a new datasheet extraction job.

To start an extraction, you'll first need to reserve a job identifier. This requires the MIME type of the datasheet file you plan to upload and, optionally, the uid of the organization you want to perform the extraction within. If you instead want the extraction performed within your personal profile, omit the organizationHandleOrUid parameter entirely.

Make note of the resulting uploadUrl and jobUid for the next steps.

request
mutation startExtractDatasheetJob {
  startExtractDatasheetJob(
    contentMimeType: "application/pdf"
    organizationHandleOrUid: "org_4508d7a8-2cc8-4961-9bd7-ca375aa3f75f"
  ) {
    ...on StartExtractDatasheetJobSuccess {
      uploadUrl
      jobUid
      result
    }
    ...on StartJobFailure {
      result
      failureReason
    }
  }
}
response
{
  "data": {
    "startExtractDatasheetJob": {
      "uploadUrl": "https://cdn2.flux.ai/copilot-datasheets-sources/j16jxYxfW4hPzOQMSJR2LHJuStp1/get-ds-comp-304fd7e6-dfd1-4582-8faa-5752bf53ee39/get-ds-comp-304fd7e6-dfd1-4582-8faa-5752bf53ee39.pdf?GoogleAccessId=graviton-mvp%40appspot.gserviceaccount.com&Expires=1718904378&Signature=IijN4Hs9amQBwpzwOTff3UC7Qx0ENZx8jjOL56SskRViA%2B1LK2hJqxQT1zEiQ4dYEsKj7ixKr5maChpjs4ygz4%2FczCcL9U3d96UK3%2F4h0fBUB2kOUmO3O3FvMkn8ptN%2F7SGoP7XNQ5%2FJzV%2BH8dIas9WqDw9RuCnrp0KbUWZD8T97wXPLXJO4hhtz1H8BlbZFCoPMXWmTuvnCJcapRiLYR5LGxkE5wh7Fr31LIDtVfV9t8pDIHLkEGWWmY5mGaXcLi%2FsoQufuJa%2FtS3ZMKUIoh9dy%2F2u55QbDsqzbkxIRCFArp2BaUHJVyEQ6SfQopIsf0gMVuRZhmUozkUxMiuteiw%3D%3D",
      "jobUid": "get-ds-comp-304fd7e6-dfd1-4582-8faa-5752bf53ee39",
      "result": "success"
    }
  }
}
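If you're scripting the flow, pulling uploadUrl and jobUid out of this response is plain JSON traversal. A sketch, using an abbreviated copy of the response above (the uploadUrl is shortened here):

```python
import json

# Abbreviated startExtractDatasheetJob response (uploadUrl truncated for clarity).
raw = """
{
  "data": {
    "startExtractDatasheetJob": {
      "uploadUrl": "https://cdn2.flux.ai/copilot-datasheets-sources/example.pdf",
      "jobUid": "get-ds-comp-304fd7e6-dfd1-4582-8faa-5752bf53ee39",
      "result": "success"
    }
  }
}
"""

job = json.loads(raw)["data"]["startExtractDatasheetJob"]
# A failure would return the StartJobFailure fields instead, so check result first.
assert job["result"] == "success"
upload_url, job_uid = job["uploadUrl"], job["jobUid"]
```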

Step 5: Upload the datasheet.

Using the uploadUrl result from the last step, upload the datasheet PDF to that location.

You'll need a tool capable of uploading the file with the HTTP PUT method; the commonly available curl is the one we'll use in this guide.

If you're using curl for your own upload, the command should look something like the following. Be sure to substitute the path to the datasheet you downloaded in step 1 for $LOCAL_FILE_PATH, and the uploadUrl value from the previous step's GraphQL response for $UPLOAD_URL. Quote the URL: signed URLs contain & characters that the shell would otherwise interpret.

curl --upload-file $LOCAL_FILE_PATH "$UPLOAD_URL"

The full curl command for our upload looks like the following:

curl --upload-file ~/Downloads/AO3422.pdf "https://cdn2.flux.ai/copilot-datasheets-sources/j16jxYxfW4hPzOQMSJR2LHJuStp1/get-ds-comp-304fd7e6-dfd1-4582-8faa-5752bf53ee39/get-ds-comp-304fd7e6-dfd1-4582-8faa-5752bf53ee39.pdf?GoogleAccessId=graviton-mvp%40appspot.gserviceaccount.com&Expires=1718904378&Signature=IijN4Hs9amQBwpzwOTff3UC7Qx0ENZx8jjOL56SskRViA%2B1LK2hJqxQT1zEiQ4dYEsKj7ixKr5maChpjs4ygz4%2FczCcL9U3d96UK3%2F4h0fBUB2kOUmO3O3FvMkn8ptN%2F7SGoP7XNQ5%2FJzV%2BH8dIas9WqDw9RuCnrp0KbUWZD8T97wXPLXJO4hhtz1H8BlbZFCoPMXWmTuvnCJcapRiLYR5LGxkE5wh7Fr31LIDtVfV9t8pDIHLkEGWWmY5mGaXcLi%2FsoQufuJa%2FtS3ZMKUIoh9dy%2F2u55QbDsqzbkxIRCFArp2BaUHJVyEQ6SfQopIsf0gMVuRZhmUozkUxMiuteiw%3D%3D"
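If curl isn't available, any HTTP client that can issue a PUT will do. For example, here's a sketch using Python's standard library; it builds the request, and `urllib.request.urlopen(req)` would then perform the actual upload:

```python
import urllib.request

def build_upload_request(upload_url, file_path, mime_type="application/pdf"):
    """Build (but do not send) a PUT request for the signed upload URL."""
    with open(file_path, "rb") as f:
        data = f.read()
    return urllib.request.Request(
        upload_url,
        data=data,
        method="PUT",
        headers={"Content-Type": mime_type},
    )

# To actually upload (expand ~ yourself, e.g. with os.path.expanduser):
#   urllib.request.urlopen(build_upload_request(upload_url, "/path/to/AO3422.pdf"))
```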
tip

The upload URL returned in the previous response is only valid for one hour. If you try to upload after that hour has elapsed, your upload will be denied, and you'll need to repeat step 4 to get a new upload URL and a new job uid.
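The signed URL's Expires query parameter carries that cutoff as Unix epoch seconds, so if you're storing URLs in a script you can check whether one is still usable. A convenience sketch of our own, not part of the API:

```python
from datetime import datetime, timezone
from urllib.parse import parse_qs, urlparse

def upload_url_expiry(upload_url):
    """Return the expiry time encoded in a signed upload URL's Expires parameter."""
    expires = parse_qs(urlparse(upload_url).query)["Expires"][0]
    return datetime.fromtimestamp(int(expires), tz=timezone.utc)

# Shortened example URL with the Expires value from the step 4 response.
url = "https://cdn2.flux.ai/copilot-datasheets-sources/example.pdf?Expires=1718904378&Signature=abc"
expiry = upload_url_expiry(url)
```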

tip

Repeated uploads are ignored, even if they complete successfully: only the first completed upload to the URL will be recognized and processed by Copilot.

Step 6: Monitor the extraction job.

Once the file has been uploaded, Copilot will automatically begin processing it to identify components represented in the datasheet that are available for import.

Keep repeating this query until you get a "status": "COMPLETE". Note that very large datasheets, or periods of high system load, can push processing times to minutes or tens of minutes.

Replace the jobUid request parameter with the one in the response from step 4.

Make note of the values returned in the jobData object for use in the next step. Note that although our example returns a single object representing the one component in the datasheet, jobData is a list and may contain multiple unique components found in the datasheet.
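Polling can be scripted as a simple loop around the query below. A sketch, where `fetch_status` stands in for whatever function you use to issue the myExtractDatasheetJob query and return the job object:

```python
import time

# Statuses after which the job will not change further.
TERMINAL_STATUSES = {"COMPLETE", "FAILED"}

def poll_job(fetch_status, interval_s=30, max_attempts=120):
    """Call fetch_status() until the job reaches a terminal status.

    fetch_status is assumed to return the job object from the GraphQL
    response, e.g. {"jobUid": "...", "jobStatus": {"status": "RUNNING"}}.
    """
    for _ in range(max_attempts):
        job = fetch_status()
        if job["jobStatus"]["status"] in TERMINAL_STATUSES:
            return job
        time.sleep(interval_s)
    raise TimeoutError("extraction job did not finish in time")
```

With a 30-second interval and 120 attempts this gives up after about an hour, which comfortably covers the "tens of minutes" worst case noted above.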

request
# Get the status of a running job to find the components in a datasheet
query myExtractDatasheetJob {
  me {
    job(jobUid: "get-ds-comp-304fd7e6-dfd1-4582-8faa-5752bf53ee39") {
      ...on ExtractDatasheetJob {
        jobUid
        pdfUid
        jobStatus {
          ...on JobWaitingForUploadStatus {
            status
            uploadLocation
          }
          ...on JobFailedStatus {
            status
            reason
          }
          ...on JobPendingStatus {
            status
          }
          ...on JobRunningStatus {
            status
          }
          ...on JobVerifiedStatus {
            status
            verifiedLocation
          }
          ...on ExtractDatasheetJobCompleteStatus {
            status
            jobData {
              pinCount
              partNumber
              packageFamily
            }
          }
        }
      }
    }
  }
}
response
{
  "data": {
    "me": {
      "job": {
        "jobUid": "get-ds-comp-304fd7e6-dfd1-4582-8faa-5752bf53ee39",
        "pdfUid": "pdf-upload-45ff485d6bbe3b8cf1e94fd9814e9aa7b0631a0315b3388059f187c2516ee94c",
        "jobStatus": {
          "status": "COMPLETE",
          "jobData": [
            {
              "pinCount": 3,
              "partNumber": "AO3422",
              "packageFamily": "SOT23"
            }
          ]
        }
      }
    }
  }
}
tip

GraphQL subscriptions are coming soon for this endpoint. You'll need to rely on polling for now.

Step 7: Import a component from the datasheet into a Flux project.

Now we're down to the most interesting part! We'll use the values that identify a component found in the datasheet to import that component into a Flux project.

As noted in the previous step, one component was found in our example datasheet, so we'll use its values to extract it. Some datasheets contain multiple components available for extraction; you'll need to choose one at a time from the returned list to import into a Flux project. You can import as many as you want, but each one requires a separate startImportComponentFromDatasheetJob mutation, and each job results in a separate Flux project.

In this request, replace the pdfUid with the one you've been using previously, and replace the targetComponent object with one that was found in the datasheet you uploaded. If you want to create the projects in an organization account, make sure to set organizationHandleOrUid in the mutation; if you want them in your personal profile, omit this parameter.

A number of additional parameters are available to control how Copilot builds the project from the datasheet component. In our example, we'll import two projects from the single datasheet component. For both projects, we'll let Copilot determine the part type and a set of properties appropriate to that part type, and we'll also request additional properties of particular interest to us via the propertiesForExtraction list. One of the imports will use the experimental Copilot Chart Understanding feature and the other will not; chart understanding is toggled with the enableChartExtraction option. Chart extraction tends to perform best with table extraction disabled, so we do that here with the enableTableExtraction flag.

danger

If Copilot Chart Understanding is available for yourself or your organization, you must explicitly enable or disable it with the enableChartExtraction option.

tip

Note the special syntax used for parameters you want to extract from charts, e.g. Q_G@7.5VGS: the parameter you want to extract, followed by an @ symbol, followed by the operating point at which to extract it.

This is syntax that we plan to expand and enhance in the future.
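If you're generating these property names programmatically, the syntax is easy to build and take apart. A small helper of our own, not part of the API:

```python
def chart_property(name, operating_point):
    """Compose a chart-extraction property name, e.g. ("Q_G", "7.5VGS") -> "Q_G@7.5VGS"."""
    return f"{name}@{operating_point}"

def split_chart_property(prop):
    """Split a chart property back into its (name, operating point) parts."""
    name, _, point = prop.partition("@")
    return name, point
```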

Make note of the jobUid in the response to each import mutation you send; you'll need them in the next step.

request (no experimental features)
# Start a job to import a specific component and package into a custom project template
mutation importComponentFromDatasheetJobNoExperimentalFeatures {
  startImportComponentFromDatasheetJob(
    pdfUid: "pdf-upload-45ff485d6bbe3b8cf1e94fd9814e9aa7b0631a0315b3388059f187c2516ee94c"
    organizationHandleOrUid: "org_4508d7a8-2cc8-4961-9bd7-ca375aa3f75f"
    targetComponent: {
      pinCount: 3
      partNumber: "AO3422"
      packageFamily: "SOT23"
    }
    options: {
      enableChartExtraction: false
      enableTableExtraction: true
    }
    propertiesForExtraction: [
      { name: "Gfs" }
    ]
  ) {
    ...on StartJobFailure {
      failureReason
      result
    }
    ...on StartImportComponentsJobSuccess {
      result
      jobUid
    }
  }
}
response (no experimental features)
{
  "data": {
    "startImportComponentFromDatasheetJob": {
      "result": "success",
      "jobUid": "import-ds-comp-995df29b-89fb-4131-802b-740442b05307"
    }
  }
}
request (with chart understanding)
# Start a job to import a specific component and package into a custom project template
mutation importComponentFromDatasheetJobWithChartUnderstanding {
  startImportComponentFromDatasheetJob(
    pdfUid: "pdf-upload-45ff485d6bbe3b8cf1e94fd9814e9aa7b0631a0315b3388059f187c2516ee94c"
    organizationHandleOrUid: "org_4508d7a8-2cc8-4961-9bd7-ca375aa3f75f"
    targetComponent: {
      pinCount: 3
      partNumber: "AO3422"
      packageFamily: "SOT23"
    }
    options: {
      enableChartExtraction: true
      enableTableExtraction: false
    }
    propertiesForExtraction: [
      { name: "Q_G@7.5VGS" }
      { name: "Q_G@3VGS" }
      { name: "R_DS_ON@6.5VGS" }
    ]
  ) {
    ...on StartJobFailure {
      failureReason
      result
    }
    ...on StartImportComponentsJobSuccess {
      result
      jobUid
    }
  }
}
response (with chart understanding)
{
  "data": {
    "startImportComponentFromDatasheetJob": {
      "result": "success",
      "jobUid": "import-ds-comp-768a0b96-bb4d-48c4-972a-9f608fa4ffbb"
    }
  }
}

Step 8: Monitor the import job(s).

Similar to step 6, we now check each job's status and wait for completion. We'll combine both of our jobs into a single query for easier monitoring.

Replace the jobUid parameters in the request with the jobUid result values from the last step.

request
# Get the status of the import jobs, after which we'll have the details of the imported projects
query myImportComponentsJob {
  me {
    noExperimentalFeatures: job(jobUid: "import-ds-comp-995df29b-89fb-4131-802b-740442b05307") {
      ...on ImportComponentsJob {
        jobUid
        pdfUid
        jobStatus {
          ...on JobFailedStatus {
            status
            reason
          }
          ...on JobPendingStatus {
            status
          }
          ...on JobRunningStatus {
            status
          }
          ...on ImportComponentsJobCompleteStatus {
            status
            importComponentsJobData: jobData {
              importedToProjectUid
              projectSlug
              projectOwnerHandle
            }
          }
        }
      }
    }
    withChartExtraction: job(jobUid: "import-ds-comp-768a0b96-bb4d-48c4-972a-9f608fa4ffbb") {
      ...on ImportComponentsJob {
        jobUid
        pdfUid
        jobStatus {
          ...on JobFailedStatus {
            status
            reason
          }
          ...on JobPendingStatus {
            status
          }
          ...on JobRunningStatus {
            status
          }
          ...on ImportComponentsJobCompleteStatus {
            status
            importComponentsJobData: jobData {
              importedToProjectUid
              projectSlug
              projectOwnerHandle
            }
          }
        }
      }
    }
  }
}
response
{
  "data": {
    "me": {
      "noExperimentalFeatures": {
        "jobUid": "import-ds-comp-995df29b-89fb-4131-802b-740442b05307",
        "pdfUid": "pdf-upload-45ff485d6bbe3b8cf1e94fd9814e9aa7b0631a0315b3388059f187c2516ee94c",
        "jobStatus": {
          "status": "COMPLETE",
          "importComponentsJobData": {
            "importedToProjectUid": "ccb09ba1-c732-4834-947e-42700bd5b2fe",
            "projectSlug": "ao3422",
            "projectOwnerHandle": "an-example-org"
          }
        }
      },
      "withChartExtraction": {
        "jobUid": "import-ds-comp-768a0b96-bb4d-48c4-972a-9f608fa4ffbb",
        "pdfUid": "pdf-upload-45ff485d6bbe3b8cf1e94fd9814e9aa7b0631a0315b3388059f187c2516ee94c",
        "jobStatus": {
          "status": "COMPLETE",
          "importComponentsJobData": {
            "importedToProjectUid": "d7e241ea-654b-4d89-94bd-2d8c5292c780",
            "projectSlug": "ao3422-b38f",
            "projectOwnerHandle": "an-example-org"
          }
        }
      }
    }
  }
}

Step 9: Take a look at the results.

The final step! Using the projectOwnerHandle and projectSlug from each import job result in the last step, you can view the parameters either in the Flux app or retrieve the data through the API.

Combine the projectOwnerHandle and projectSlug into a string, separating them with a slash:

`${projectOwnerHandle}/${projectSlug}`

For our two sample projects we get an-example-org/ao3422 and an-example-org/ao3422-b38f. Note that Copilot will try to name the project for the part that was imported. If the name is already taken (as with our second example), it will add a unique string to the name to differentiate.
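Stitching the two values together, and into an app URL following the profile-URL pattern shown in step 3, is a one-liner:

```python
def project_path(owner_handle, slug):
    """Combine owner handle and project slug into the full project slug."""
    return f"{owner_handle}/{slug}"

def project_app_url(owner_handle, slug):
    # URL pattern inferred from the organization profile URL in step 3.
    return f"https://www.flux.ai/{project_path(owner_handle, slug)}"
```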

These can be viewed in the Flux app at https://www.flux.ai/an-example-org/ao3422 and https://www.flux.ai/an-example-org/ao3422-b38f.

tip

Don't worry, your projects will not be public by default! These two projects have been shared publicly to help illustrate this guide.

Alternatively, if you swap your full project slugs into the following query, you can retrieve the full list of electrical properties that Copilot extracted via the API. Voila!

request
# Get all the properties of a project
query projectProperties {
  noExperimentalFeatures: project(by: "an-example-org/ao3422") {
    properties {
      name
      value
      unit
    }
  }
  withChartUnderstanding: project(by: "an-example-org/ao3422-b38f") {
    properties {
      name
      value
      unit
    }
  }
}
response
{
  "data": {
    "noExperimentalFeatures": {
      "properties": [
        {
          "name": "Gfs",
          "value": "11",
          "unit": "S"
        },
        {
          "name": "Continuous Drain Current",
          "value": "2.1",
          "unit": "A"
        },
        {
          "name": "Current Rating",
          "value": "2.1",
          "unit": ""
        },
        // ... many omitted for brevity ...
      ]
    },
    "withChartUnderstanding": {
      "properties": [
        {
          "name": "Q G 3 VGS",
          "value": "1.77",
          "unit": "nC"
        },
        {
          "name": "R DS ON 6 5 VGS",
          "value": "121.87",
          "unit": "mΩ"
        },
        {
          "name": "Continuous Drain Current",
          "value": "2.1",
          "unit": ""
        },
        {
          "name": "Current Rating",
          "value": "2.1",
          "unit": "A"
        },
        // ... many omitted for brevity ...
      ]
    }
  }
}