This specification of an Application Programming Interface (API) is designed to facilitate the exchange of data about building envelopes. Several databases use the same API specification to offer data about components. A metabase manages, for example, the identifiers of components and institutions, which must be the same for all databases.
This API specification consists of GraphQL schemas, which specify the GraphQL endpoints, and JSON Schemas, which specify the format of the responses. The GraphQL schemas are in the directory ./apis. There is a visualization of the GraphQL schema of the metabase and of the database. Example GraphQL queries and mutations are in the directory ./requests. You can try the queries of the tutorial at the GraphQL endpoint of the metabase.
GraphQL queries deal with metadata about data sets and are used to find suitable data sets. The response to a GraphQL query is JSON. For example, optical.json defines the metadata about an optical data set, and it can include a URL as a locator to download the pure data.
The format BED-JSON is used for the pure data. solarTransmittanceReflectance.json is an example of a pure data set. It must be valid against the JSON Schema opticalData.json.
JSON Schemas are in the directory ./schemas, and realistic example JSON files are in the directory ./examples. The directory ./tests/valid provides test JSON files that must be valid, and the directory ./tests/invalid provides JSON files that must be invalid, against the JSON Schema with the same name as their subfolder. There is a schematic drawing of an optical data point and more visualizations of optical examples. There is also a schematic drawing of a calorimetric data set and more visualizations of calorimetric examples.
The following introduction explains the structure for new users and the section "On your Linux machine" explains how you can work with the API specification.
I don't want to read this whole thing, I just have a question!

If you have a question, please read this README.md and search this repository, including its wiki, discussions, Questions and Answers, and existing issues, for the answer. If you do not find the answer there and your question is related to the code, please raise a new issue and add the tag "question".

The following topics are covered below:
- With optical data as example
- In general for all domains
- Combining the GraphQL responses with the datasets
- How was the data created?
- Implementation of the API specification
There are many domains of data such as optical, calorimetric, geometric, hygrothermal, life cycle and more. This introduction begins with optical data to illustrate the structure.
solarTransmittanceReflectance.json is a simple example of how the nearnormal-hemispherical solar transmittance and reflectance can be exchanged. infraredDataPointForDiagram.json is an example of infrared values. colorVisibleTransmittanceReflectance.json is an example of visible optical properties. All three are examples of optical properties integrated over a range of wavelengths.
spectrallyResolvedDataPointsForDiagram.json is an example of spectrally resolved optical data. To keep it simple, it includes only the values for three wavelengths. igsdbExampleClearlite-4_250903.json is a realistic optical data set which includes spectrally resolved data as well as integral data.
These optical datasets must all be valid against the JSON Schema opticalData.json. opticalData.json defines each key and is therefore the best source to learn more about the details of optical datasets.
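To give a rough idea of the shape of such a file, here is a deliberately abbreviated and purely hypothetical sketch of a dataset exchanging a nearnormal-hemispherical solar transmittance and reflectance. The key names and values below are invented for illustration and do not conform to opticalData.json; consult the schema and solarTransmittanceReflectance.json for the actual structure.

{
  "_comment": "Hypothetical illustration only; not valid against opticalData.json",
  "componentId": "00000000-0000-0000-0000-000000000000",
  "dataPoints": [
    { "quantity": "nearnormalHemisphericalSolarTransmittance", "value": 0.82 },
    { "quantity": "nearnormalHemisphericalSolarReflectance", "value": 0.07 }
  ]
}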
In order to find such optical data sets, it is best to start at the GraphQL endpoint https://www.buildingenvelopedata.org/graphql/, the entrance to the product data network. There you can enter, for example, the following query:
query {
  databases {
    edges {
      node {
        name
        allOpticalData(first: 3) {
          edges {
            node {
              componentId
              resourceTree {
                root {
                  value {
                    locator
                  }
                }
              }
            }
          }
          totalCount
          pageInfo {
            endCursor
            hasNextPage
            hasPreviousPage
            startCursor
          }
        }
      }
    }
  }
}
The field locator returns a URL which you can use to download the optical data set. The query asks for optical data across the whole product data network, and with pagination you can download all optical datasets. This example shows that GraphQL is first used to search for data sets before the JSON data sets themselves are downloaded. You can find more query examples in tutorial.graphql. Whenever you have questions regarding these queries, you can find all details about them in the GraphQL specification.
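If hasNextPage is true, a next page can be requested by passing the returned endCursor as the after argument, as in the following sketch. The cursor string is only a placeholder taken from a previous response, and the exact pagination arguments are defined by the GraphQL schemas in ./apis.

# "PREVIOUS_END_CURSOR" is a placeholder for the endCursor value of the previous response.
query {
  databases {
    edges {
      node {
        name
        allOpticalData(first: 3, after: "PREVIOUS_END_CURSOR") {
          edges {
            node {
              componentId
            }
          }
          pageInfo {
            endCursor
            hasNextPage
          }
        }
      }
    }
  }
}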
For the other domains, such as hygrothermal, geometric, calorimetric, and lifeCycle, it is helpful to understand the structure of this repository. For each domain, there is a JSON Schema like hygrothermalData.json, geometricData.json, and so on in the folder ./schemas. The folder ./examples provides realistic examples. Each example must be valid against the JSON Schema with the name of its subfolder. For example, the JSON files of ./examples/calorimetricData must be valid against the JSON Schema calorimetricData.json. Each JSON Schema contains all the information you need to understand the content of the JSON files.
The folder ./tests/valid contains JSON files which test specifically what their name indicates. Each test must be valid against the JSON Schema with the name of the subfolder. For example, the JSON files of ./tests/valid/hygrothermalData must be valid against the JSON Schema hygrothermalData.json. The folder ./tests/invalid contains JSON files which must be invalid against the JSON Schema with the name of the subfolder.
The tests and examples help to understand the exchange of product data. They are also important for the development of this specification to ensure that a new feature does not compromise the existing features.
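As a rough sketch of how a single file could be checked by hand, the following command uses ajv-cli, which is installed by the development tooling described in the section "On your Linux machine". The example file name is hypothetical, and flags such as -r for referenced schemas or --spec for the JSON Schema draft may need to be added; the make targets test and examples remain the authoritative way to validate.

# Sketch only: someExample.json is a hypothetical file name, and additional flags
# (e.g. -r for referenced schemas or --spec for the JSON Schema draft) may be required.
npx ajv validate \
  -s ./schemas/calorimetricData.json \
  -d ./examples/calorimetricData/someExample.json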
Usually, GraphQL is used to query for datasets, and then the datasets are downloaded as JSON files. However, the response of a GraphQL query is itself valid JSON. Therefore, the GraphQL responses can be combined with the datasets into one large JSON file.
For example, semitransparentBuildingIntegratedPhotovoltaicThermal.json must be valid against the JSON Schema component.json. It contains the central identifier of the component, which is created when the component is registered in the metabase buildingenvelopedata.org. Each dataset of each domain has its own decentral identifier, which is managed by the product database(s) that contain the dataset.
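The following purely hypothetical JSON sketch illustrates the idea of one combined file: the central component identifier from the metabase sits next to a dataset with its decentral identifier and locator from a product database. The key names and URL are invented and do not conform to component.json; see semitransparentBuildingIntegratedPhotovoltaicThermal.json for a real combined example.

{
  "_comment": "Hypothetical illustration only; not valid against component.json",
  "component": {
    "id": "central-identifier-assigned-by-the-metabase",
    "optical": [
      {
        "id": "decentral-identifier-assigned-by-a-product-database",
        "locator": "https://example.com/optical/123",
        "data": { "note": "the downloaded BED-JSON dataset would be embedded here" }
      }
    ]
  }
}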
It is possible to define the method which was used to create a dataset. The methods must be registered in the metabase buildingenvelopedata.org, and each method receives its own unique central identifier.
The test integralAccordingToStandard.json is an example of an optical dataset which was created according to a standard. The dataset which was used as the source of the calculation is defined in lines 25-31. It would also be possible to define arguments of the calculation method.
With your web browser, you can search our wiki, the issues, and the pull requests, and contribute to them.
In order to browse the code conveniently with Codespaces, open building-envelope-data/api in your favorite web browser, click the button "Code" in the top-right corner, and select the tab "Codespaces". On first use, click "+" to create a new codespace; on subsequent uses, click the name of an existing codespace.
If you are developing this repository further, you can follow the description "With Docker" below. For example, you can test and format your contributions with
cp ./.env.sample ./.env
make shell
make compile
make examples
make test
make format
In order to use our development tooling, for example, to format code and to run tests, follow the instructions below.
- Open your favorite shell, for example, good old Bourne Again SHell, aka bash, the somewhat newer Z shell, aka zsh, or the shiny new fish.
- Install Git by running
  sudo apt install git-all
  on Debian-based distributions like Ubuntu, or
  sudo dnf install git
  on Fedora and closely-related RPM-Package-Manager-based distributions like CentOS. For further information see Installing Git.
- Clone the source code by running
  git clone git@github.com:building-envelope-data/api.git
  and navigate into the new directory building-envelope-data by running
  cd building-envelope-data
- Prepare your environment by running
  cp ./.env.sample ./.env
  and adjusting the copied environment to your needs.
With Docker

- Install Docker Desktop and GNU Make.
- List all GNU Make targets by running
  make help
  The targets name, tag, build, remove, run, shell, remove-containers, remove-volumes, and serve can be used to interface with Docker. The other ones can be used within bash inside a Docker container: compile validates the JSON Schemas against the JSON Schema meta-schemas and the GraphQL schemas against the GraphQL specification; test validates the tests against the schemas; examples validates the examples against the schemas; format formats source files; introspect introspects the GraphQL schemas; dos2unix converts Windows-style to UNIX-style line endings; install-tools installs development tools from the lock file; and update-tools updates development tools to the latest compatible minor versions.
- Drop into bash, with the working directory /app mounted to the host's working directory, inside a fresh Docker container based on Debian Linux with everything installed, by running
  make shell
  If necessary, the Docker image is (re)built automatically, which takes a while the first time.
- Do something with the project like validating the schemas by running
make compile
- Drop out of the container by running
  exit
  or pressing Ctrl-D.
Without Docker

- Install GNU Bash, GNU Make, and npm.
- Install the development tools in package.json by running
  make install-tools
  which in particular installs the command-line interface for Another JSON Schema Validator (AJV), namely ajv-cli, as a Node package to be executed through npx, for example,
  npx ajv --help
- Drop into bash.
- Do something with the project as elaborated above.
Note that a POSIX-compatible shell other than GNU Bash should also do. See also the POSIX specification and the POSIX FAQ.
Also note that GNU Make takes the shell from the variable SHELL or, if not set, the program /bin/sh. See Choosing the Shell in the GNU Make manual.
Our Code of Conduct is the guideline for our collaboration.
A database which implements this API specification is available at https://github.com/building-envelope-data/database. A metabase which implements this API specification is available at https://www.buildingenvelopedata.org/ (front end), https://www.buildingenvelopedata.org/graphql/ (back end), and https://github.com/building-envelope-data/metabase (source code). The metabase manages, for example, the identifiers of components and institutions, which must be the same for all databases. The databases manage the data sets of the components.
If you are interested in contributing by asking questions, reporting bugs, or suggesting enhancements, please see CONTRIBUTING.md for further details.
