
Dataset Management

Datasets Overview

To view and browse all loaded datasets, open the Datasets view by clicking the list icon in the left sidebar. The Datasets view lists all currently loaded datasets together with metadata such as the status, tag, creation time, and content update date, as well as the number of entities and statements. The Search field in the upper right corner of the Datasets overview lets you search for a particular dataset among all loaded datasets.

For each dataset, several actions are available: view the Dataset details by clicking the details icon, or browse the dataset directly by clicking the Table view or Tree view icon. More information on browsing datasets can be found in the Search & Browse section.

Dataset Properties

The Dataset details view gives you an overview of the following dataset properties.

Name: A descriptive title that is displayed in the Datasets view.
Tag: A short and unique mnemonic abbreviation or code for the dataset. The tag is used as a shortcut throughout Accurids, e.g., in the search result display or search filters.
Description: An informative description of the dataset.
Color: The color of the badge that indicates the dataset throughout different Accurids screens.
Load status: A dataset must be loaded before you can manage it. Possible values:
  • loading completed successfully
  • loading in progress
  • loading not yet started
  • loading failed
Index status: A dataset must be successfully indexed before you can work with its content. Possible values:
  • indexing completed successfully
  • indexing in progress
  • indexing not yet started
  • indexing failed
Analysis status: Each dataset is analyzed regarding structure and quality. Possible values:
  • analysis completed successfully
  • analysis in progress
  • analysis not yet started
  • analysis failed
Created: The date and time when the dataset was initialized.
Created by: The user who created the dataset.
Updated: The date and time when the metadata of the dataset was last changed.
Last updated by: The user who last changed the dataset metadata.
Content updated: The date and time when the content of the dataset was last changed.
Content updated by: The user who last changed the dataset content.
Storage Size: The storage size of the dataset.
Entities: The number of indexed entities.
Statements: The number of triples.
Unique predicates: The number of unique predicates.
Mappings: The total number of mapping triples contained in the dataset.
Hierarchy properties: The list of detected hierarchical properties such as rdfs:subClassOf or skos:broader. The value is empty if no hierarchical property has been found in the dataset.
Cycles: The number of cycles formed by some hierarchical property.
ID Generator: The ID Generator associated with the dataset. Only available with the PID Generator Module.

Working with Datasets

Users in Accurids are assigned a specific role that allows or restricts particular actions. Regarding dataset management, standard users cannot upload or edit any dataset. Contributors can upload datasets and edit or delete the datasets they uploaded. Admins can upload, edit, or delete any loaded dataset, regardless of who originally uploaded it. More information on user roles can be found in the Platform Administration section.

Creating a New Dataset

In the Datasets view, click the create button in the upper right corner to open the dataset creation wizard.

The wizard guides you through two steps:

  1. Add Sources: Build the list of sources that should be imported into Accurids.
  2. Configure & Create: Decide whether the sources should become one dataset or multiple datasets, then review the dataset metadata and visibility.

You can combine multiple sources in the same wizard run. To add a source, click + Add Source, choose the source type, configure it, and click Add.

In the second step, choose one of the following creation modes:

  • Bulk Create: Creates one dataset per source. You can review and adjust the generated dataset name and tag for each source before creating them.
  • Single Dataset: Combines all selected sources into one dataset. You must specify the dataset name and tag and can optionally add a description.

You can also choose the dataset visibility during creation. Depending on your system configuration, datasets can be created as Internal, Private, or Public.

If your sources include CSV/JSON files, a database endpoint, or a Veeva endpoint, you must also add a transformation file (.rqg) before you can continue. When a transformation file is included, Accurids creates a single dataset and bulk creation is not available.

Finally, click Create Dataset(s). File uploads may continue in the background while the wizard waits for them to finish. You can monitor the loading and indexing progress in the Datasets view. After ingestion and indexing have completed successfully, you can start searching in the dataset.

Using File Upload

Choose File Upload to add one or more local files. You can drag and drop files or click to browse.

Supported formats include RDF files such as Turtle, RDF/XML, N-Triples, N-Quads, TriG, JSON-LD, and OWL, as well as CSV, JSON, transformation files (.rqg), and supported archive formats.

Using URI

Choose URI Import to provide one or more dataset URLs. Each URI is added as a separate source.

Remote Accurids

Choose Remote Accurids to import datasets from another Accurids instance. You can select an existing remote connection or create a new one, test the connection, and then choose one or more datasets from the remote instance to add as sources.

Configuration of a Remote Accurids Endpoint

To configure a Remote Accurids endpoint, click Create New Connection and fill in the required parameters:

  • Name: A descriptive label for the connection.
  • URL: The base URL of the remote Accurids instance (e.g., https://remote.accurids.com).
  • API Key: The API key used to authenticate with the remote instance. API keys are stored securely on the server and are never displayed back to the user.

Use the Test Connection button to verify that the connection to the remote instance is working correctly before saving.

Using SPARQL

Choose SPARQL Endpoint to import data from a SPARQL service. You can select an existing endpoint or create a new one, test the connection, and then choose one or more graphs to add as sources.

Configuration of a SPARQL Endpoint

To configure a SPARQL endpoint, click Create New Endpoint and fill in the required parameters:

  • Name: A descriptive label for the endpoint.
  • URL: The SPARQL endpoint URL.
  • Authorization Type (optional): Choose whether the endpoint should be accessed without authentication, with basic authorization, or with an API key.
  • Username and Password: Required when using basic authorization.
  • Add to, Key, and Value: Required when using API key authentication to define where the key is sent and which value should be used.

Use the Test Connection button to verify that the endpoint can be reached before saving.

Using Relational Database

Choose Database to import data from a relational database. You can select an existing database endpoint or create a new one and test the connection before adding it as a source.

A transformation file (.rqg) is required for database imports. Add it as a file source in the wizard. See the Transformation file section for more details.

Configuration of a Database Endpoint

To configure a Database endpoint, click Create New Endpoint and fill in the required parameters:

  • Database type: Select the database system, for example PostgreSQL, MySQL, or Oracle.
  • Name: A descriptive label for the connection.
  • Host: The database server host name.
  • Port: The database server port.
  • Database name: The name of the database to connect to.
  • Username (optional): The database user name.
  • Password (optional): The password for the database user.

Use the Test Connection button to verify that the database can be reached before saving.

Using Veeva Vault

Choose Veeva RIM to import data from a Veeva Vault instance. You can select an existing Veeva endpoint or create a new one and test the connection before adding it as a source.

A transformation file (.rqg) is required for Veeva imports. Add it as a file source in the wizard. The transformation file uses VQL (Veeva Query Language) to select data from Vault. See the Transformation file section for more details.

Configuration of a Veeva Endpoint

To configure a Veeva endpoint, click Create New Endpoint and fill in the required parameters:

  • Name: A descriptive label for the connection. This name is used to reference the endpoint in the transformation file (.rqg), so it must match the endpoint name used in the accuridsVeeva: prefix (e.g., if the name is myVault, the RQG file references it as accuridsVeeva:myVault).
  • Host: The base URL of the Veeva Vault instance.
  • API Version: The Veeva Vault REST API version in the format XX.X (e.g., 24.1). See the Veeva Vault API documentation for available versions.
  • Username: The Veeva Vault user name.
  • Password: The password for the Veeva Vault user.
  • Visibility: Controls who can see the endpoint. Choose Private (visible only to you) or Internal (visible to all registered users). Defaults to Private.

Use the Test Connection button to verify that the connection to the Veeva Vault instance is working correctly before saving.

Using API Connectors

If API connectors are configured in your Accurids instance, choose API Connector to import data through one or more available connectors. Each selected connector is added as a separate source.

Updating the Content of a Dataset

In Dataset details view, in the Sources section, click the pencil icon to open the wizard to update the content of the dataset.

The update wizard also has two steps:

  1. Add Sources: Add the new sources that should be loaded into the dataset.
  2. Review & Update: Choose how the new sources should be applied.

In the second step, choose one of the following options:

  • Add to existing data: Keeps the current dataset content and adds the new sources.
  • Replace existing dataset data: Clears the current dataset content and loads only the newly selected sources.

For datasets that use an ID Generator, replacing the existing content is not available. In that case, new sources can only be added to the existing data.

Finally, click Update sources to start the update. File uploads may continue in the background while the wizard waits for them to finish. You can monitor the loading and indexing progress in the Datasets view or in the dataset details view.

If you want to update the name, tag, description, or color of the dataset, you can do that directly in the Dataset details view by clicking the pencil icon.

Controlling Permissions for Datasets

Accurids allows administrators and dataset owners to configure access and control over datasets. Permissions are managed through a dedicated Permissions section in the dataset details view, where users can define who can view, download, approve, and publish changes.

To facilitate access management, Accurids allows assigning access permissions to a group of users instead of managing permissions for each user individually. An administrator can manage user groups. See the corresponding section in the admin guide for further details.

Managing Dataset Visibility and Access

In the Permissions section, you can configure dataset visibility and access through the following options:

1. Data Can Be Viewed By
  • A dataset’s visibility can be Public, Internal, or Private.
  • Public: The dataset is visible to everyone, including anonymous (unauthenticated) users, indicated by the label "Everyone, including anonymous users".
    • This option is only available if enabled by an administrator.
  • Internal: The dataset is visible to all registered users and groups, indicated by the label "All registered users".
  • Private: Only explicitly assigned users or groups can access it. By default, only the dataset owner has access.
  • To change who can view the dataset, click the pencil icon next to "Data can be viewed by" to open Edit viewers. In the Visibility dropdown, choose Public, Internal, or Private, then save.
  • Users can modify view permissions by selecting "Edit user or group" and assigning users or groups accordingly.
  • Note: Administrators can view every dataset regardless of its visibility settings.
2. Data Can Be Downloaded By
  • This setting controls who can download dataset content.
  • This setting can be:
  • Internal: The dataset can be downloaded by all registered users, indicated by the label "All registered users".
  • Private: Only explicitly assigned users or groups can download the dataset. By default, only the dataset owner has download access.
  • Users can modify download permissions by selecting "Edit user or group" and assigning users or groups accordingly.
  • Note: Administrators can download every dataset regardless of its settings.
3. Data Can Be Edited By
  • This setting allows dataset owners and administrators to assign write permissions to specific users or user groups, controlling who can modify the dataset content.
  • This setting can be:
  • Internal: The dataset can be edited by all users with the Contributor or Admin role, indicated by the label "All users with the Contributor or Admin role".
  • Private: Only explicitly assigned users or groups can edit the dataset. By default, only the dataset owner has edit access.
  • Write permissions complement existing visibility settings, enabling distinct configurations where a dataset can be visible to some users while restricting modification rights to others.
  • Write permissions can be granted to multiple users or groups simultaneously.
  • Users with write permissions can modify dataset content, such as making edits or adding entities, while their changes remain subject to approval workflows.
  • Users can modify write permissions by selecting "Edit user or group" and assigning users or groups accordingly.
  • Note: Write permissions can only be configured by dataset owners or administrators, and are applied at the dataset level. Permission changes are logged for traceability.

Controlling Dataset Lifecycle Permissions

When a user edits a dataset, the changes go through a lifecycle before they are applied (see the section about pending changes for details). Permissions related to dataset changes are managed in the Permissions section:

1. Changes Can Be Approved By
  • Defines who is authorized to approve submitted changes, moving them from the "submitted" stage to the "approved" stage.
  • Users can assign specific users or groups to this role.
  • Note: Administrators can approve any dataset changes regardless of assigned approvers.
2. Changes Can Be Published By
  • Defines who is authorized to publish approved changes to the dataset.
  • Users can assign specific users or groups to this role.
  • Note: Administrators can publish any dataset changes regardless of assigned publishers.
Enforcing the "Four-Eyes Principle"

Accurids supports additional safeguards for the change lifecycle:

  • If "Approver must be different than submitter for each change" is enabled, a user cannot approve changes they submitted, even if they are listed as an approver.
  • If "Publisher must be different than approver for each change" is enabled, the user who approved a change cannot publish it.

Updating Dataset Ownership

Accurids allows the ownership of a dataset to be transferred from one user to another. This is particularly useful when a dataset needs to be managed by a different user, such as when responsibilities change or when a project transitions to a new team member.

To update the ownership of a dataset, follow these steps:

  1. Open the Dataset Details View: Navigate to the Datasets overview and select the dataset whose ownership you wish to change. This will open the Dataset details view.
  2. Change the Owner: In the upper right corner of the Dataset details view, you will see an icon labeled Change owner. Click on this icon to initiate the ownership change process.
  3. Select a New Owner: A dropdown menu will appear, displaying a list of users who can be selected as the new owner. You can scroll through the list or start typing a username to refine the search results. Once you have found the appropriate user, click on their name to select them and confirm the change by clicking the Save button. The ownership of the dataset will be updated immediately.
  4. Verification: The new owner's name will now be displayed under the Owner field in the Dataset details view, indicating that the transfer was successful.

Note: Only the current owner or an administrator can transfer ownership of a dataset.

Download a Dataset

In the Dataset details view, click the download icon to download the dataset. Depending on the file size, this may take a few minutes.

Remove a Dataset

To remove a dataset, go to the Dataset details view, click the trashcan icon, and confirm the deletion. This action cannot be undone.

Search for a Dataset

The Search field in the upper right corner of the Datasets overview allows you to search for a particular dataset within all loaded datasets.

Dataset Requirements

To be successfully loaded, indexed, and displayed, a dataset has to fulfill the following requirements:

  • RDF Syntax: The dataset must use valid RDF syntax in one of the supported serializations, such as Turtle, N3, or RDF/XML.
  • RDF Type: All entities that should be indexed must have an rdf:type property.
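
For example, a minimal Turtle snippet that satisfies both requirements could look like this (all names are illustrative, not taken from a real dataset):

```turtle
@prefix ex:   <http://example.org/> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .

# The entity has an explicit rdf:type, so it will be indexed.
ex:aspirin a ex:Drug ;
    rdfs:label "Aspirin" .
```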

Transformation file

A transformation file is required to map non-RDF sources, such as relational databases, CSV files, or JSON files, into RDF triples. The transformation file has the extension .rqg. The library used for the transformation is sparql-generate; more advanced examples can be found on its website.

Example with Relational Database

Assume we have already configured a database endpoint called dbConn, and that the database contains a table called user that we want to map into triples and upload.

The table looks like below:

id  email                    dob         first_name  last_name
1   first.user@example.com   1990-01-01  First       User
2   second.user@example.com  1991-02-03  Second      User
3   third.user@example.com   1991-05-08  Third       User

The transformation file (.rqg):

PREFIX accuridsIterator: <https://accurids.com/iterator/>
PREFIX xsd: <http://www.w3.org/2001/XMLSchema#>

BASE <http://example.org/>

GENERATE {
<id/{ xsd:string(?id) }> a <User>;
    <email> ?email .
}

ITERATOR accuridsIterator:SQL(<https://accurids.com/databaseEndpoint/dbConn>, "select id, email from user") AS ?id ?email

The generated triples:

http://example.org/id/1 @rdf:type http://example.org/User
http://example.org/id/1 @http://example.org/email "first.user@example.com"
http://example.org/id/2 @rdf:type http://example.org/User
http://example.org/id/2 @http://example.org/email "second.user@example.com"
http://example.org/id/3 @rdf:type http://example.org/User
http://example.org/id/3 @http://example.org/email "third.user@example.com"

The iterator for loading from a relational database is https://accurids.com/iterator/SQL. The previously created database endpoint connection is also referenced by a URI, combining the prefix https://accurids.com/databaseEndpoint/ with the endpoint name (in this example, dbConn).

CAVEAT: A URI must be built from a string. Hence, the xsd:string conversion is needed because the original data type of id is an integer.

Example with CSV

Assume we have a CSV file named persons.csv like this:

PersonId,Name,Phone,Email,Birthdate,Height,Weight
1,Jin Lott,374-5365,nonummy@nonsollicitudina.net,1990-10-23T09:39:36+01:00,166.58961852476,72.523064012179
2,Ulric Obrien,1-772-516-9633,non.arcu@velit.co.uk,1961-11-18T02:18:23+01:00,164.38438947455,68.907470544061
3,Travis Wilkerson,240-1629,felis@Duisac.co.uk,1956-03-05T15:57:29+01:00,163.47434097479,64.217840002146

The transformation file (.rqg):

PREFIX iter: <http://w3id.org/sparql-generate/iter/>
PREFIX xsd: <http://www.w3.org/2001/XMLSchema#>
PREFIX foaf: <http://xmlns.com/foaf/0.1/>
PREFIX schema: <http://schema.org/>

BASE <http://example.org/>

GENERATE {
 ?personIRI a foaf:Person ;
            foaf:name ?name;
            foaf:mbox ?email ;
            foaf:phone ?phone ;
            schema:birthDate ?birthdate ;
            schema:height ?height ;
            schema:weight ?weight .
}
SOURCE <persons.csv> AS ?persons
ITERATOR iter:CSV(?persons) AS ?personId ?name ?phoneStr ?emailStr ?birthdateStr ?heightStr ?weightStr
WHERE {
    BIND( URI( CONCAT( "http://example.com/person/", ?personId ) ) AS ?personIRI )
    BIND( URI( CONCAT( "tel:", ?phoneStr ) ) AS ?phone )
    BIND( URI( CONCAT( "mailto:", ?emailStr ) ) AS ?email )
    BIND( xsd:dateTime( ?birthdateStr ) AS ?birthdate )
    BIND( xsd:decimal( ?heightStr ) AS ?height )
    BIND( xsd:decimal( ?weightStr ) AS ?weight )
}

The generated triples:

http://example.com/person/1 @rdf:type http://xmlns.com/foaf/0.1/Person
http://example.com/person/1 @http://xmlns.com/foaf/0.1/name "Jin Lott"
http://example.com/person/1 @http://xmlns.com/foaf/0.1/mbox mailto:nonummy@nonsollicitudina.net
http://example.com/person/1 @http://xmlns.com/foaf/0.1/phone tel:374-5365
http://example.com/person/1 @http://schema.org/birthDate "1990-10-23T09:39:36+01:00"^^http://www.w3.org/2001/XMLSchema#dateTime
http://example.com/person/1 @http://schema.org/height "166.58961852476"^^http://www.w3.org/2001/XMLSchema#decimal
http://example.com/person/1 @http://schema.org/weight "72.523064012179"^^http://www.w3.org/2001/XMLSchema#decimal
http://example.com/person/2 @rdf:type http://xmlns.com/foaf/0.1/Person
http://example.com/person/2 @http://xmlns.com/foaf/0.1/name "Ulric Obrien"
http://example.com/person/2 @http://xmlns.com/foaf/0.1/mbox mailto:non.arcu@velit.co.uk
http://example.com/person/2 @http://xmlns.com/foaf/0.1/phone tel:1-772-516-9633
http://example.com/person/2 @http://schema.org/birthDate "1961-11-18T02:18:23+01:00"^^http://www.w3.org/2001/XMLSchema#dateTime
http://example.com/person/2 @http://schema.org/height "164.38438947455"^^http://www.w3.org/2001/XMLSchema#decimal
http://example.com/person/2 @http://schema.org/weight "68.907470544061"^^http://www.w3.org/2001/XMLSchema#decimal
http://example.com/person/3 @rdf:type http://xmlns.com/foaf/0.1/Person
http://example.com/person/3 @http://xmlns.com/foaf/0.1/name "Travis Wilkerson"
http://example.com/person/3 @http://xmlns.com/foaf/0.1/mbox mailto:felis@Duisac.co.uk
http://example.com/person/3 @http://xmlns.com/foaf/0.1/phone tel:240-1629
http://example.com/person/3 @http://schema.org/birthDate "1956-03-05T15:57:29+01:00"^^http://www.w3.org/2001/XMLSchema#dateTime
http://example.com/person/3 @http://schema.org/height "163.47434097479"^^http://www.w3.org/2001/XMLSchema#decimal
http://example.com/person/3 @http://schema.org/weight "64.217840002146"^^http://www.w3.org/2001/XMLSchema#decimal

The iterator for loading the CSV is http://w3id.org/sparql-generate/iter/CSV.
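
Conceptually, the transformation iterates over the CSV rows and emits one set of triples per row, with each BIND clause deriving a value from a column. As a rough illustration only (Accurids runs this via SPARQL-Generate, not Python; the names below mirror the example above), the row-to-triples mapping could be sketched like this:

```python
import csv
import io

# Sample input mirroring the first row of persons.csv above.
CSV_DATA = """PersonId,Name,Phone,Email
1,Jin Lott,374-5365,nonummy@nonsollicitudina.net
"""

def rows_to_triples(text):
    """Map each CSV row to (subject, predicate, object) triples,
    mirroring the BIND clauses of the .rqg example."""
    triples = []
    for row in csv.DictReader(io.StringIO(text)):
        # The subject IRI is concatenated from a base and the PersonId column.
        s = "http://example.com/person/" + row["PersonId"]
        triples.append((s, "rdf:type", "foaf:Person"))
        triples.append((s, "foaf:name", row["Name"]))
        triples.append((s, "foaf:mbox", "mailto:" + row["Email"]))
        triples.append((s, "foaf:phone", "tel:" + row["Phone"]))
    return triples

triples = rows_to_triples(CSV_DATA)
```

Each row therefore yields one rdf:type triple plus one triple per mapped column, which matches the generated output shown above.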

Example with JSON

Assume we have a JSON file named persons.json like this:

[
  {
    "PersonId": 1,
    "Name": "Jin Lott",
    "Phone": "374-5365",
    "Email": "nonummy@nonsollicitudina.net",
    "Birthdate": "1990-10-23T09:39:36+01:00",
    "Height": 166.58961852476,
    "Weight": 72.523064012179
  },
  {
    "PersonId": 2,
    "Name": "Ulric Obrien",
    "Phone": "1-772-516-9633",
    "Email": "non.arcu@velit.co.uk",
    "Birthdate": "1961-11-18T02:18:23+01:00",
    "Height": 164.38438947455,
    "Weight": 68.907470544061
  },
  {
    "PersonId": 3,
    "Name": "Travis Wilkerson",
    "Phone": "240-1629",
    "Email": "felis@Duisac.co.uk",
    "Birthdate": "1956-03-05T15:57:29+01:00",
    "Height": 163.47434097479,
    "Weight": 64.217840002146
  }
]

The transformation file (.rqg):

PREFIX iter: <http://w3id.org/sparql-generate/iter/>
PREFIX xsd: <http://www.w3.org/2001/XMLSchema#>
PREFIX foaf: <http://xmlns.com/foaf/0.1/>
PREFIX schema: <http://schema.org/>

BASE <http://example.org/>

GENERATE {
 ?personIRI a foaf:Person ;
            foaf:name ?name;
            foaf:mbox ?email ;
            foaf:phone ?phone ;
            schema:birthDate ?birthdate ;
            schema:height ?height ;
            schema:weight ?weight .
}
SOURCE <persons.json> AS ?persons
ITERATOR iter:JSONSurfer(?persons, "$[*]",
    "$.PersonId",
    "$.Name",
    "$.Phone",
    "$.Email",
    "$.Birthdate",
    "$.Height",
    "$.Weight"
) AS ?I1 ?personId ?name ?phoneStr ?emailStr ?birthdateStr ?heightStr ?weightStr
WHERE {
    BIND( URI( CONCAT( "http://example.com/person/", xsd:string(?personId) ) ) AS ?personIRI )
    BIND( URI( CONCAT( "tel:", ?phoneStr ) ) AS ?phone )
    BIND( URI( CONCAT( "mailto:", ?emailStr ) ) AS ?email )
    BIND( xsd:dateTime( ?birthdateStr ) AS ?birthdate )
    BIND( xsd:decimal( ?heightStr ) AS ?height )
    BIND( xsd:decimal( ?weightStr ) AS ?weight )
}

The generated triples:

http://example.com/person/1 @rdf:type http://xmlns.com/foaf/0.1/Person
http://example.com/person/1 @http://xmlns.com/foaf/0.1/name "Jin Lott"
http://example.com/person/1 @http://xmlns.com/foaf/0.1/mbox mailto:nonummy@nonsollicitudina.net
http://example.com/person/1 @http://xmlns.com/foaf/0.1/phone tel:374-5365
http://example.com/person/1 @http://schema.org/birthDate "1990-10-23T09:39:36+01:00"^^http://www.w3.org/2001/XMLSchema#dateTime
http://example.com/person/1 @http://schema.org/height "166.58961852476"^^http://www.w3.org/2001/XMLSchema#decimal
http://example.com/person/1 @http://schema.org/weight "72.523064012179"^^http://www.w3.org/2001/XMLSchema#decimal
http://example.com/person/2 @rdf:type http://xmlns.com/foaf/0.1/Person
http://example.com/person/2 @http://xmlns.com/foaf/0.1/name "Ulric Obrien"
http://example.com/person/2 @http://xmlns.com/foaf/0.1/mbox mailto:non.arcu@velit.co.uk
http://example.com/person/2 @http://xmlns.com/foaf/0.1/phone tel:1-772-516-9633
http://example.com/person/2 @http://schema.org/birthDate "1961-11-18T02:18:23+01:00"^^http://www.w3.org/2001/XMLSchema#dateTime
http://example.com/person/2 @http://schema.org/height "164.38438947455"^^http://www.w3.org/2001/XMLSchema#decimal
http://example.com/person/2 @http://schema.org/weight "68.907470544061"^^http://www.w3.org/2001/XMLSchema#decimal
http://example.com/person/3 @rdf:type http://xmlns.com/foaf/0.1/Person
http://example.com/person/3 @http://xmlns.com/foaf/0.1/name "Travis Wilkerson"
http://example.com/person/3 @http://xmlns.com/foaf/0.1/mbox mailto:felis@Duisac.co.uk
http://example.com/person/3 @http://xmlns.com/foaf/0.1/phone tel:240-1629
http://example.com/person/3 @http://schema.org/birthDate "1956-03-05T15:57:29+01:00"^^http://www.w3.org/2001/XMLSchema#dateTime
http://example.com/person/3 @http://schema.org/height "163.47434097479"^^http://www.w3.org/2001/XMLSchema#decimal
http://example.com/person/3 @http://schema.org/weight "64.217840002146"^^http://www.w3.org/2001/XMLSchema#decimal

The iterator for loading the JSON is http://w3id.org/sparql-generate/iter/JSONSurfer.

Example with Veeva Vault

Assume we have already configured a Veeva endpoint called myVault. The Veeva Vault contains a products object with fields id and name__v that we want to map into triples and upload.

The transformation file (.rqg):

PREFIX accuridsVeeva: <https://accurids.com/veevaEndpoint/>
PREFIX accuridsIterator: <https://accurids.com/iterator/>
PREFIX xsd: <http://www.w3.org/2001/XMLSchema#>

BASE <http://example.org/>

GENERATE {
<id/{ xsd:string(?id) }> a <Product>;
    <name> ?name .
}

ITERATOR accuridsIterator:Veeva(accuridsVeeva:myVault, "SELECT id, name__v FROM products") AS ?id ?name

The generated triples:

http://example.org/id/1 @rdf:type http://example.org/Product
http://example.org/id/1 @http://example.org/name "Product A"
http://example.org/id/2 @rdf:type http://example.org/Product
http://example.org/id/2 @http://example.org/name "Product B"

The iterator for loading data from Veeva Vault is https://accurids.com/iterator/Veeva. It takes two arguments: a reference to the Veeva endpoint connection using the URI prefix https://accurids.com/veevaEndpoint/ followed by the endpoint name (in this example, myVault), and a VQL query string. VQL (Veeva Query Language) is similar to SQL and is used to select data from Veeva Vault objects.