FAQs

General

The Block Protocol provides a specification for the interaction between web blocks and applications using them: how data structures are typed and passed around, and what data operations are available to blocks.

Its ultimate goal is to enable any block to be usable by any web application, without any configuration, once both are compliant with the Protocol.

More information on the motivation can be found in the specification overview.

The Block Protocol’s focus is on defining the interface between web blocks and applications using them.

It does not specify what happens to data once it crosses that boundary - it only seeks to standardize application-block interaction.

Existing web protocols do not define standardized interfaces between blocks and applications that might embed them.

Efforts exist to do this for particular ecosystems (e.g. Microsoft Loop Components, and application-specific plugin frameworks) - but none which aim to improve the web as a whole. The Block Protocol is a generic contract which can be implemented by any web application.

Various frontend libraries and technologies provide means of implementing encapsulated blocks (e.g. React, Web Components), but they do not standardize the interface between those blocks and the applications using them (e.g. the operations available to them).

Our focus is on supporting the web first and foremost, as this is the context we expect the Block Protocol to be most useful in, but we believe it will be portable to other environments.

Since version 0.2 of the Block Protocol, messages are passed between blocks and applications by dispatching and listening for events - a pattern that can be translated to many non-web contexts. While some implementation details will vary across contexts, the general principles of the Block Protocol in defining a contract between applications and embedded blocks are widely applicable.
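
As a rough illustration of this dispatch-and-listen pattern, here is a minimal sketch in TypeScript. The event name and payload shape below are hypothetical stand-ins, not the Block Protocol's actual message format.

// The embedding application listens for messages dispatched from the block...
const blockElement = document.createElement("div");

blockElement.addEventListener("example-block-message", (event) => {
  const { messageName, payload } = (event as CustomEvent).detail;
  console.log(`Block sent "${messageName}"`, payload);
});

// ...and the block sends messages by dispatching bubbling custom events.
blockElement.dispatchEvent(
  new CustomEvent("example-block-message", {
    bubbles: true,
    detail: { messageName: "updateEntity", payload: { name: "New title" } },
  }),
);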

In the future we intend to support non-web contexts more directly.

Blocks built for the Block Protocol work inside multiple embedding applications, with support for even more promised in the future. Creating a Þ block enables a developer to code a block once and then use that same block in a variety of places, without having to make any application-specific affordances to ensure their block functions correctly or looks good.

Anybody with web development experience can build a block to add functionality to an application that implements the Block Protocol. That means if you're the user of a protocol-supporting application such as WordPress or HASH, developing a Þ block will enable you to extend that app's capabilities, and take your work with you to other Þ environments should you wish.

Blocks don't have to be open-sourced, but we strongly encourage publishing them to the Þ Hub and sharing them with others. You may be motivated to do this by any of the factors that already result in thriving open-source software ecosystems today: challenge, competition, fun, reputation, and the desire to contribute to and be part of a community.

Open-sourcing blocks can also be born of self-interest. Making your code public allows other interested users to contribute back to improving your block further, enhancing its utility and fixing any bugs that may be hiding within. Stephen Walli writes that open-sourcing code isn't always "contributing back out of altruism", but sometimes "engineering economics". It can be the right commercial call, hardening and improving the product (in this case, the block) that you yourself are relying upon.

Implementing the Block Protocol means that an embedding application can easily add new functionality, increasing its value to users.

Once they are Block Protocol-compliant, applications can easily search for blocks which can visualize or edit any type of data, and add them without any further configuration.

Blocks additionally make it easy to include structured data markup (JSON-LD) on pages, improving websites' search engine visibility and rankings.

The Block Protocol has been open-sourced by HASH, whose mission is to help everybody make the right decisions. Part of how we'll make a dent in this problem is by clearly separating display, visualization and interaction logic from underlying data, while making it easier to bring data together in a form that can be understood by both humans and machines (e.g. other applications).

HASH started life as a simulation platform, offering insights from modeling the world to anyone with a browser. But this is just one way to get insight from data, and simulation models can be greatly enhanced when built atop the sort of timely, structured information that the Block Protocol makes accessible.

The Block Protocol advances HASH’s mission by encouraging both data itself and the tools that edit and visualize data to be structured and portable. This makes it easier to work with more data in more places, and unlocks more functionality for more users, increasing the ability for people to understand and learn from the world.

The Block Protocol itself is completely open-source, dual-licensed under the MIT License and Apache 2.0 License, at your option. This means that you're free to use any of the base code however you like, in line with those licenses - for example, by implementing support for the Block Protocol in your own application.

Specific implementations of the Block Protocol in other applications may be kept private, or released under a license of the publisher's choosing (e.g. both the WordPress and Figma plugins use the Elastic License).

Block Implementation

Blocks can be published for free to the Þ Hub. Publishing a block to the Þ Hub makes it discoverable to users browsing blockprotocol.org, as well as embedding applications consuming the Þ API.

Through the Þ API, embedders can access the metadata and source code of blocks, listing blocks within their own application and serving them to users.

Entity data can be hosted in any application which implements the Block Protocol (e.g. HASH). If you are developing an embedding application you would like listed here, please get in touch.

Yes. Blocks can fetch their own data from external services. For example, a map block might fetch data from a mapping service.

We believe the best blocks will communicate data back and forth with the embedding application, making use of the operations defined in the Graph Module specification.

For example: a mapping block might persist a user’s choices about map positioning or styling back to an embedding application, all without the application knowing anything about the block, or the block even being aware of which application it is being used within.

The Block Protocol enables blocks to store and retrieve data in an application without awareness of how exactly the data is stored.

Blocks can have their own local state - they might use this to allow users to explore data or draft changes, without saving anything.

To save data beyond the session, blocks should make use of the operations defined in the spec to send updates to the embedding application.
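
For instance, a block might persist data along these lines - a minimal sketch, assuming a hypothetical handle and operation shape rather than the Graph Module's exact interface:

// Hypothetical sketch: "GraphModuleHandle" stands in for whatever handle the
// embedding application exposes for graph operations; names are illustrative.
type UpdateEntityRequest = {
  entityId: string;
  properties: Record<string, unknown>;
};

interface GraphModuleHandle {
  updateEntity(request: UpdateEntityRequest): Promise<{ errors?: string[] }>;
}

// Inside a map block: persist the user's chosen map position back to the
// embedding application, without knowing how or where it is stored.
async function saveMapPosition(
  graphModule: GraphModuleHandle,
  entityId: string,
  center: { lat: number; lng: number },
  zoom: number,
) {
  const { errors } = await graphModule.updateEntity({
    entityId,
    properties: { center, zoom },
  });
  if (errors?.length) {
    console.error("Failed to persist map position", errors);
  }
}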

Blocks can be published with any kind of license, similar to code on GitHub, or libraries in a package manager like npm.

We strongly encourage block publishers to be explicit about the permissions (if any) they grant to others when publishing blocks to the Þ Hub, and recommend using open-source software licenses when publishing blocks, rather than relying on Creative Commons licenses. More information about applying software licenses to blocks and types can be found on the HASH glossary licensing page.

As with most programming language package managers, anybody can publish executable code (in the form of a block), so it's important that you trust the blocks you execute.

Because those inserting blocks are often non-technical end users, this isn't always practical. As such, we recommend that embedders implement both sandboxing within their environments, and allow/deny lists.

  • Sandboxing will be covered in depth in our guide for embedders.
  • Allow/deny lists can be used to restrict or control:
    • which blocks users are able to discover and insert within your environment
    • whether the blocks a user inserts are ultimately run in a sandboxed or unsandboxed fashion

Today you might choose to allow/deny blocks on the basis of their authorship (published namespace), or based on their verification status.
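
As a loose sketch of how an embedder might apply such a policy (the field names and rules here are hypothetical, not part of the protocol):

// Illustrative only: gate block execution on authorship and verification.
type BlockListing = {
  namespace: string; // e.g. "@hash"
  verified: boolean;
};

const allowedNamespaces = new Set(["@hash"]);
const deniedNamespaces = new Set<string>();

function executionMode(
  block: BlockListing,
): "unsandboxed" | "sandboxed" | "denied" {
  if (deniedNamespaces.has(block.namespace)) {
    return "denied"; // not discoverable or insertable at all
  }
  if (allowedNamespaces.has(block.namespace) || block.verified) {
    return "unsandboxed"; // trusted origin: run directly for performance
  }
  return "sandboxed"; // unknown third-party blocks still run, but isolated
}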

In the future we also plan to expose:

  • additional Þ Hub metadata which may prove useful in creating and maintaining allow/deny lists;
  • permission grants for blocks and block types, allowing users to control the level and types of access blocks have (e.g. the ability to access information created through the Graph Module outside of themselves, or access to other services).

Example - HASH: HASH allows first-party blocks (i.e. those developed by HASH and listed under their own @hash namespace on the Þ Hub), alongside third-party blocks that have been verified, to be run in an unsandboxed fashion within their application, because they trust the origins and quality of those blocks. By unsandboxing blocks HASH can deliver them more efficiently to users, while providing a richer user experience. The blocks in HASH's namespace are amongst the most used within their application, providing default basic text editing functionality, and so the ability to unsandbox these and run them optimally is considered valuable. HASH allows all other blocks on the Þ Hub to be discovered and inserted by its users, but delivers these in a sandbox.

Example - WordPress: The Block Protocol for WordPress plugin restricts the insertion of blocks to verified blocks only. This can be turned off by WordPress administrators from the plugin settings panel, but prevents non-technical authors and editors from accidentally relying on blocks which have not been subjected to review. In the future, the plugin will additionally support sandboxing and more granular permissions.

Blocks submitted to the Þ Hub are immediately available for discovery and use.

However, when blocks are published or updated, they are also potentially eligible to be added to a review queue.

Blocks must include a 'repository' and 'commit' field in their metadata to be eligible for review. These must correspond to the source code the block was built from.
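
A hypothetical excerpt of such metadata, shown here as a TypeScript constant for illustration (the surrounding fields and values are invented):

const blockMetadata = {
  name: "my-map-block",
  version: "0.1.0",
  // Must point at the exact source the published block was built from:
  repository: "https://github.com/example-org/my-map-block",
  commit: "4f2c9b7e1d0a3c5f8b6e2d9a7c1f0e3b5d8a6c4f",
};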

We periodically review all blocks in the queue, checking for basic indicators (and contraindicators) of a block's quality, security and integrity. While this check does not represent a fully comprehensive audit of a block, the process provides a baseline level of assurance regarding a block's fitness for purpose, and all blocks which pass this review receive the 'verified' badge.

We believe blocks should provide neutral, minimal styling of their own, and leave it to embedding applications to provide additional styling - this might be in the form of an entire stylesheet for blocks to load, or style variables which can be selectively applied.

There is ongoing discussion on this topic - we welcome your views.

Rich text editing is a core part of many modern block-based applications, but details often vary across applications:

  • how rich text is represented in existing applications varies depending on the particular approach used
  • applications often provide special functionality inside rich text fields, e.g. being able to @mention or search for things inline

We do not believe it feasible or desirable to impose a single rich text editing experience across applications, and instead have introduced the Hook Module to allow embedding applications to inject their own rich text editing display and input, at blocks' request.
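
The general idea, very roughly sketched below - the handle and message shape are assumptions for illustration, not the Hook Module's actual API:

// Hypothetical sketch: the block hands the embedding application a DOM node
// and asks it to render the application's own rich text editor there.
interface HookModuleHandle {
  hook(request: {
    node: HTMLElement; // where the application should render its editor
    type: "text"; // the kind of hook being requested
    path: string; // which property of the entity the editor should edit
  }): Promise<{ hookId: string }>;
}

async function requestRichTextEditor(
  hookModule: HookModuleHandle,
  container: HTMLElement,
) {
  // The application injects its own rich text input (with @mentions, inline
  // search, etc.) into `container`; the block never sees how it is built.
  const { hookId } = await hookModule.hook({
    node: container,
    type: "text",
    path: "description",
  });
  return hookId; // an identifier for the established hook
}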

Blocks composed of other blocks is an important feature which we believe the Block Protocol must support, and which we have begun exploring.

A simple example is a ListBlock composed of a list of other blocks, for which it provides placeholders.

Blocks which are made up of other blocks may be referred to as “compound blocks”. We will be publishing example compound blocks soon.

Web Components can describe the events they dispatch programmatically, but each one can be different. This means that you often need to know the details of how a specific element operates in order to implement it within an application.

The goal of the Block Protocol is to allow new blocks to be added to applications without any case-specific configuration or requirement that either the embedder or block know about each other's existence ahead of time.

To do this, the Block Protocol standardizes how data requests are made between a block and an embedding application.

Web Components (or custom elements) are a popular way of implementing Block Protocol blocks.
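
For example, a minimal custom element that could serve as the shell of a block (the element name and behavior are purely illustrative; the Block Protocol's messaging would sit on top of an element like this):

// A minimal custom element: one possible shell for a block.
class GreetingBlock extends HTMLElement {
  connectedCallback() {
    const name = this.getAttribute("name") ?? "world";
    this.textContent = `Hello, ${name}!`;
  }
}

customElements.define("greeting-block", GreetingBlock);
// Usage in an embedding application's markup:
// <greeting-block name="Ada"></greeting-block>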

GraphQL provides a “syntax and system for describing [application] data requirements and interactions”, whereas the Block Protocol is specifying a particular set of interactions: those between a block and any application embedding it.

It would be possible to define the operations specified in the Block Protocol in GraphQL - e.g. createEntity - or to extend the GraphQL spec to include them.

We do not yet believe it necessary to specify block-application requests in GraphQL syntax nor require that they be executed by the embedding application according to the GraphQL specification, although we are open to the idea. It may become more attractive as operations evolve to include more features already covered by GraphQL (e.g. subscriptions, selection sets).

Endpoints and APIs

Yes, blocks can make requests to external APIs. They can do this in one of two ways:

  • directly: all of the code required to access an API can be contained within a block. This is not recommended where a service requires an API key, as it will be impossible to keep the key secret (while keeping the block usable) once the block is published to the Þ Hub.
  • through the Þ Hub: in addition to blocks and types, endpoints (APIs) can also be made accessible via the Þ Hub. Currently available endpoints include OpenAI and Mapbox. All Block Protocol users receive free credits for use with these services, and receive discounted access through Þ Hobby and Þ Pro subscriptions. Individual embedding applications can additionally override the Block Protocol API's handling of any available service and choose to resolve these themselves, falling back to the Block Protocol's support for services or ignoring them entirely. Read more in the Service Module specification, or see the illustrative sketch below.
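
As an illustrative sketch of the second approach (the handle and method below are assumptions, not the Service Module's actual interface):

// Hypothetical sketch: a block calls an external API through the Þ Hub
// rather than holding its own key.
interface ServiceModuleHandle {
  completeText(request: { prompt: string }): Promise<{ text: string }>;
}

async function suggestCaption(
  serviceModule: ServiceModuleHandle,
  imageDescription: string,
) {
  // The embedding application (or the Þ middleware) holds the credentials,
  // applies rate limits, and bills usage; the block only sends the request.
  const { text } = await serviceModule.completeText({
    prompt: `Write a short caption for: ${imageDescription}`,
  });
  return text;
}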

Facilitating external API access via the Þ Hub is a new addition to the protocol, and instructions for making additional endpoints available will be added in due course. In the meantime, we're able to manually add endpoints on behalf of block developers. If there's an endpoint you'd like to use within a new or existing block, please contact us.

The ability for Þ blocks to access external services while (optionally) authenticated as an individual user is a capability we intend to introduce in the coming months. This will enable blocks that address a much wider range of use cases.

Example: a block user has information synced to Coda or Airtable from multiple external locations. They want to display this data on their website, which uses WordPress. With the Block Protocol for WordPress plugin, the user will in future be able to use a Coda or Airtable block to select information contained within their external account and insert it into their page or post in WordPress, where it can automatically keep itself in sync with the original source. This allows publishing from a wide variety of external sources.

Paid external APIs can be accessed via the Block Protocol middleware, including from within Þ blocks.

In applications that ask users to provide their own Þ API keys: some Þ embedding applications ask users to input their own Þ API key. All Þ users receive a free usage allowance for most services on the Þ Hub, and Þ account holders can attach a payment method to unlock additional access. Embedding applications such as WordPress leverage this by passing an application user's Þ API key along with their requests for blocks and services.

In applications that use their own Þ API key, and abstract the protocol away from users: there are multiple ways in which Þ embedding applications can provide access to paid external services, including via the Þ middleware, without requiring users to input their own API keys. For example:

  • self-handling requests: bilaterally integrating with an external service and intercepting the request before it is handled by the Þ middleware
  • attributing requests to users: in a future update, we will support programmatic generation of API keys (e.g. one per user, requested by the embedding application) associated with a single Þ account. Additionally, we intend to support affixing arbitrary metadata (including user IDs) to Þ API calls, which may provide an alternative means of attributing service usage to individual application users.

These approaches all require handling rate-limiting, payments and service security internally, which carries a higher development and maintenance burden, but unlocks support for billing models beyond simple pay-as-you-go.

Schemas, vocabularies and the semantic web

The goal of the Semantic Web is to make internet data machine-readable.

It involves making sure there is data on web pages which can be parsed by machines, in order to determine the entities described, and their links to other entities elsewhere on the web.

The Block Protocol requires that the data passed between blocks and applications is in the form of entities conforming to a defined structure: a schema.

This structure can then be used to include data on web pages which describes the entities in a machine-readable way.

While the Block Protocol doesn’t require this data to be exposed or made downloadable, we do encourage embedding applications to do this as much as possible.

For example, where a page contains a block displaying a movie, machine-readable data would also be included describing the various properties of the movie (e.g. releaseDate), as well as linking to other pages which describe entities linked to it - e.g. its director.

One way of representing entities in a machine-readable way is JSON-LD, which describes entities and their links to other entities. Here's an example of such a representation, taken from the JSON-LD homepage:

{
  "@context": "https://json-ld.org/contexts/person.jsonld",
  "@id": "http://dbpedia.org/resource/John_Lennon",
  "name": "John Lennon",
  "born": "1940-10-09",
  "spouse": "http://dbpedia.org/resource/Cynthia_Lennon"
}

The structure of entities in the Block Protocol is described by JSON Schema, and we encourage these schemas to be mapped to terms which can be used to construct a JSON-LD representation of the entity, which embedding applications can include in the page's markup.

No, and for the most part the Block Protocol itself does not define the structure of entities passed between blocks and applications. Instead it specifies how entities should be defined, transmitted and updated.

The Block Protocol specifies that JSON Schema should be used to describe the properties of an entity and expected types of property values - not what those properties should be.
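
For example, a minimal entity schema might look something like this - shown as a TypeScript constant for illustration; in practice it would be a JSON Schema document, and the property names are examples only:

const movieSchema = {
  $schema: "https://json-schema.org/draft/2019-09/schema",
  title: "Movie",
  type: "object",
  properties: {
    name: { type: "string" },
    releaseDate: { type: "string", format: "date" },
    runtimeMinutes: { type: "number" },
  },
  required: ["name"],
} as const;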

You can use existing schema.org schemas, or create new schemas through the Block Protocol website once logged in. The schema creator on the Block Protocol website aims to make it easier for users to define the different types of data structures that their blocks will work with.

We don’t prescribe canonical types for any thing, but do recommend that users link their schemas and properties to schema.org types and properties where possible, to help in making the pages that use their schemas machine-readable. We have included a way of doing so in our schema editor. Mapping different schemas and their properties to one another is a process known as ‘crosswalking’.

schema.org defines a collection of schemas for use in making the data on web pages machine-readable.

The Block Protocol does not define the structure of entities passed between blocks and applications - it only specifies how they are transmitted and updated.

Implementers of the Block Protocol could use the structure of entities as defined by schema.org, if they wished (e.g. a block could be built to render or edit a schema.org Event).

We recommend that users link their schemas and their properties to schema.org types and properties where possible, and have included a way of doing so in our schema editor.

There are two different types of ‘crosswalking’ we intend to support in the Block Protocol:

  1. Crosswalking from JSON schema properties to schema.org (and equivalent) properties, in order to make rendered pages machine-readable - as described above and supported in the schema editor. The purpose of this is to mark up web pages with structured data describing the entities within.

  2. Crosswalking from JSON schema properties to other JSON schema properties, in order for applications to understand that seemingly incompatible schemas may in fact be compatible, and to translate between them. For example, if one Table schema has a 'rows' property, and another Table schema has a 'records' property, declaring that the two are equivalent allows applications to translate data conforming to the first Table schema for use in places where data conforming to the second Table schema is expected. This mitigates the impact of different approaches being taken to describe the same data (sketched below).
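
A rough sketch of this second kind of crosswalk, assuming an application holds a simple property-to-property mapping (the mapping format is hypothetical):

// Illustrative only: translate data between two schemas whose properties
// have been declared equivalent ("rows" in one, "records" in the other).
const propertyCrosswalk: Record<string, string> = {
  rows: "records",
};

function translate(
  entity: Record<string, unknown>,
  crosswalk: Record<string, string>,
): Record<string, unknown> {
  return Object.fromEntries(
    Object.entries(entity).map(([key, value]) => [crosswalk[key] ?? key, value]),
  );
}

// A table conforming to the first schema...
const tableA = { rows: [["a", 1], ["b", 2]] };
// ...translated for use where the second schema is expected:
const tableB = translate(tableA, propertyCrosswalk); // { records: [["a", 1], ["b", 2]] }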

Crosswalking?

Mapping different schemas and their properties to one another is a process known as ‘crosswalking’.

It helps machines more easily understand how new schemas fit into existing knowledge graphs and ontologies.

A semantic type system is central to our vision for the Block Protocol.

We don’t believe that a single set of schemas provided by any one standards organization (even Schema.org) can ever perfectly fit all use cases, and as such the ability to create new schemas is important. When doing this, we want to make them as accessible to machines (and people) as possible.

The Block Protocol website includes a schema editor that provides a convenient way to define new entity types used by or with blocks and their embedding applications. These are then hosted persistently and made accessible via the same content delivery network that serves blocks from the Þ Hub, guaranteeing their availability and discoverability.

Who can see my types/schemas?

All types created on the Block Protocol website are currently public. In the future we’ll support the creation of private types as well.

schema.org provides a great base ontology for defining lots of types of ‘things’ out there in the world. However, relying on it alone has a few drawbacks:

  1. Many times, certain ‘properties’ or data won’t be relevant to your use case, resulting in bloated entity type definitions.
  2. On other occasions you’ll want to store information differently than how it’s set out in the Schema.org definition. For example, schema.org/Person defines a person as having a givenName and a familyName and yet in various cultures this isn’t guaranteed. HASH uses preferredName and legalName instead - in communications and billing contexts respectively. You can view the HASH ‘Person’ schema at https://blockprotocol.org/@hash/types/Person and see how it crosswalks with the canonical definition of a person provided by Schema.org. This is just one example of how custom schemas can be made to relate back to the core ontology provided by Schema.org.
  3. The Schema.org ontology is slow to change. This is by design, similar to how most standards organizations operate. And while it can move fast when clear universal impetus exists (evidenced by its excellent response to COVID) it ultimately represents a pseudo-centralized model of maintaining a schema registry. The Block Protocol provides the permissionless ability to build atop and extend Schema.org, unbounded by the processing constraints and imagination of any single working group or organization.
  4. Not everything is included in the Schema.org ontology. For example, although animal shelters and pet stores exist, animals and pets themselves are missing entirely. This arguably reflects the priorities of schema.org’s maintainers (notably corporations, and search engines at that).

Moving beyond a single set of constrained schemas that rarely change introduces a number of issues. However, we believe that all of these can be mitigated and that the benefits of an open ecosystem of well-described, discoverable, and end-user definable types greatly outweigh any potential drawbacks.

  1. Reusability of types: the type system supports a high degree of composability, with data types and property types individually definable and addressable from within different entity types and contexts.
  2. Convergence on types: the type editors developed by HASH as part of their ontology manager include functionality to encourage users — where possible — to utilize and extend, or otherwise crosswalk, to existing types. These best-practice editors are open-source and are free to use within any other embedding application that builds upon the Block Protocol. We will be replacing the schema designer found within the Hub with these new type editors shortly.
  3. Machine-resolvable relationships between types: we are planning to introduce an improved ability for types to declare themselves to be structurally or literally the same as other types, as well as conceptually the same.

JSON Schema has a rich vocabulary for validating data. We designed the Block Protocol with content management systems and collaborative workspaces in mind, and being able to define precise constraints for data is important for those applications.

We expect to have to add custom keywords to cover relationships which JSON Schema does not have in its core vocabulary (e.g. that a property is inverseOf another), but believe it will be easiest to start with JSON Schema and add keywords, rather than start with another vocabulary and add the required validation to it.
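
For instance, a property schema might one day carry such an annotation. The keyword name and shape below are hypothetical, not part of the protocol today; most standard JSON Schema validators would simply ignore the unknown keyword.

const employsProperty = {
  title: "Employs",
  type: "array",
  items: {
    type: "string",
    description: "entityId of an employed Person",
  },
  // Hypothetical custom keyword declaring the inverse relationship:
  inverseOf: "https://example.com/@alice/property-types/employed-by",
} as const;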

We encourage applications to include JSON-LD describing entities on their public pages to make them machine-readable.

In theory, the Block Protocol could use JSON-LD as the format in which entities are passed between applications and blocks. We did not pursue this because we believe it will be easier and more scalable to handle links between entities outside the JSON for the entity itself, as described here. We have also taken a different approach to identifying entities, which may not have a public URL and may require a combination of fields to identify.
