If your organisation is managing the API, you will also need to manage the authorisation server.

Use application-level authorisation if you want to control which applications can access your API, but not which specific end users. It is suitable if you want to use rate limiting, auditing, or billing functionality. Application-level authorisation is typically not suited to APIs holding personal or sensitive data, unless you really trust your consumers, for example another government department.

We advise using OAuth 2.0, the open authorisation framework (specifically the Client Credentials grant type). This gives each registered application an OAuth2 Bearer Token, which it can use to make API requests on its own behalf.
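As an illustration, a registered application can obtain a token from the authorisation server's token endpoint using the Client Credentials grant and then present it as a Bearer Token. This is a minimal sketch using the Python requests library; the token URL, client credentials and API URL are placeholders, not part of this guidance.

```python
import requests

# Placeholder endpoint and credentials issued when the application was registered.
TOKEN_URL = "https://auth.example.gov.uk/oauth2/token"
CLIENT_ID = "registered-app-id"
CLIENT_SECRET = "registered-app-secret"

def get_bearer_token():
    """Request an OAuth2 Bearer Token using the Client Credentials grant."""
    response = requests.post(
        TOKEN_URL,
        data={"grant_type": "client_credentials"},
        auth=(CLIENT_ID, CLIENT_SECRET),  # HTTP Basic auth with the client credentials
        timeout=10,
    )
    response.raise_for_status()
    return response.json()["access_token"]

def call_api(token):
    """Make an API request on the application's own behalf."""
    return requests.get(
        "https://api-name.api.gov.uk/resource",
        headers={"Authorization": f"Bearer {token}"},
        timeout=10,
    )
```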

Provide user-level authorisation

Use user-level authorisation if you want to control which end users can access your API. This can be ideal for working with personal or sensitive data.

For example, OAuth 2.0 is a popular authorisation framework in government, specifically using the Authorisation Code grant type. Use OAuth 2.0 Scopes for more granular access control.

OpenID Connect (OIDC), which builds on top of OAuth2 and uses JSON Web Tokens (JWTs), may be suitable in some cases, for example a federated system.
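To show what the JWT part involves, here is a minimal sketch of validating an OIDC ID token with the PyJWT library, assuming an RSA-signed token. The issuer, audience and public key are placeholders; in practice they come from the identity provider's discovery document and your client registration.

```python
import jwt  # PyJWT

# Placeholder values from the identity provider and your client registration.
ISSUER = "https://idp.example.gov.uk"
AUDIENCE = "your-client-id"

def validate_id_token(id_token, public_key):
    """Validate an OIDC ID token's signature and standard claims."""
    claims = jwt.decode(
        id_token,
        public_key,
        algorithms=["RS256"],
        audience=AUDIENCE,
        issuer=ISSUER,
    )
    # The 'sub' claim identifies the end user, which is what a federated system relies on.
    return claims["sub"]
```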

Whitelisting

Use whitelisting if you want your API to be permanently or temporarily private, for example to run a private beta. You can whitelist per application or per user.

You should not whitelist the IP addresses of APIs you consume. This is because APIs may be provided using Content Delivery Networks (CDNs) and scalable load balancers, which rely on flexible, rapid allocation and sharing of IP addresses. Instead of whitelisting, you should use an HTTPS egress proxy.
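As a sketch of the alternative, outbound API calls can be routed through an HTTPS egress proxy rather than whitelisting the remote API's changing IP addresses. The proxy address below is a placeholder; the requests library's proxies parameter is one way to do this.

```python
import requests

# Placeholder address of your organisation's HTTPS egress proxy.
EGRESS_PROXY = "http://egress-proxy.internal.example:3128"

def call_external_api(url):
    """Route an outbound API call through the egress proxy instead of
    whitelisting the remote API's (potentially changing) IP addresses."""
    return requests.get(
        url,
        proxies={"https": EGRESS_PROXY, "http": EGRESS_PROXY},
        timeout=10,
    )
```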

choose a suitable refresh frequency and expiry period for your user access tokens – failure to refresh access tokens regularly may cause vulnerabilities

let your users revoke authority

invalidate an access token yourself and force a reissue if there is reason to suspect a token has been compromised

use time-based one-time passwords (TOTP) for additional security on APIs with application-level authorisation (see the sketch after this list)

use multi-factor authentication (MFA) and identity verification (IV) for additional security on APIs with user-level authorisation

make sure the tokens you provide have the narrowest permissions possible (narrowing the permissions means there’s a far lower risk to your API if the tokens are lost or compromised)
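To illustrate the TOTP item above, here is a minimal sketch using the pyotp library. The shared secret is a placeholder, and how you distribute and store it is a design decision outside this guidance.

```python
import pyotp

# Placeholder shared secret, exchanged with the registered application out of band.
SHARED_SECRET = pyotp.random_base32()

def current_code():
    """Generate the time-based one-time password the application sends with its request."""
    return pyotp.TOTP(SHARED_SECRET).now()

def verify_code(submitted_code):
    """Server-side check of the submitted TOTP, allowing one time step of clock drift."""
    return pyotp.TOTP(SHARED_SECRET).verify(submitted_code, valid_window=1)
```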

Your API security is only as effective as your day-to-day security processes.

Monitor your APIs for unusual behaviour just like you would closely monitor any website. Look for changes in IP addresses or users using APIs at unusual times of the day. See the National Cyber Security Centre (NCSC) guidance on how to implement a monitoring strategy and the specifics of how to monitor the security status of networks and systems.

All API naming in URLs (including the name of your API, namespaces and resources) should:

use nouns instead of verbs

be short, simple and clearly understandable

be human-guessable, avoiding technical or specialist terms where possible

use hyphens instead of underscores as word separators for multiword names

For example: api-name.api.gov.uk.

Generally, each of your APIs should have its own domain, just as each service has its own domain. This will also help you avoid API sprawl and simplify your versioning.

If you provide multiple APIs and you have a business case that means you’ll deploy common services across them, such as common management, authentication and security approaches, you may want to consider:

providing them all from the same domain

differentiating them by using namespaces

The namespace should reflect the function of government being offered by this API. Namespaces can be singular or plural, depending on the situation.

Sub-resources must appear beneath the resource they relate to, but should go no more than three levels deep, for example: /resource/id/sub-resource/id/sub-sub-resource.

If you reach a third level of granularity (sub-sub-resource), you should review your resource construction to see if it is actually a combination of multiple first or second level resources.

You should use path parameters to identify a specific resource or resources. For example, /users/1.

You should only allow query strings to be used in GET requests for filtering the values returned from an individual resource, for example /users?state=active or /users?page=2.

You should never use query strings in GET requests for identification purposes, for example avoid using a query string like /users?… to select a specific user.

Query strings should not be used for defining the behaviour of your API, for example /users?action=getUser&… .
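A small sketch of these conventions, using FastAPI purely as an illustration (the framework and field names are assumptions, not part of this guidance): path parameters identify a resource, while query strings only filter or page through a collection.

```python
from typing import Optional
from fastapi import FastAPI

app = FastAPI()

# Path parameter identifies a specific resource: GET /users/1
@app.get("/users/{user_id}")
def get_user(user_id: int):
    return {"id": user_id}

# Query strings only filter or page through the collection: GET /users?state=active&page=2
@app.get("/users")
def list_users(state: Optional[str] = None, page: int = 1):
    # Filtering and pagination happen here; identification never does.
    return {"state": state, "page": page}
```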

When iterating your API to add new or improved functionality, you should minimise disruption for your users so that they do not incur unnecessary costs.

To minimise disruption for users, you should:

make backwards compatible changes where possible – specify that parsers ignore properties they don’t expect or understand to make sure changes are backwards compatible (this enables you to add fields to update functionality without requiring changes to the client application; see the sketch after this list)

make a new endpoint available for significant changes

provide notices for deprecated endpoints

New endpoints do not always need to accompany new functionality if changes still maintain backwards compatibility.
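As an illustration of that kind of tolerant parsing, a client can deserialise only the fields it knows about and silently ignore anything extra, so the provider can add fields without breaking it. The object and field names below are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class User:
    # Only the fields this client version actually understands.
    id: int
    name: str

def parse_user(payload: dict) -> User:
    """Build a User from a JSON payload, ignoring unknown properties so
    newly added fields in the API response do not break this client."""
    known = {field: payload[field] for field in ("id", "name") if field in payload}
    return User(**known)

# A newer API version may add fields; this client still parses the response.
parse_user({"id": 1, "name": "Alice", "preferred_contact": "email"})
```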

When you need to make a backwards incompatible change, you should consider:

incrementing a version number in the URL or the HTTP header (start with /v1/ and increment with whole numbers)

supporting both old and new endpoints in parallel for a suitable time period before discontinuing the old one

telling users of your API how to validate data, for example, tell them when a field will no longer be present so they can make sure their validation rules treat that field as optional

Sometimes you’ll need to make a bigger change and simplify a complex object structure by folding data from multiple objects together. In this case, make a new object available at a new endpoint, for example:

combine data about users and accounts from:

/v1/users/123 and /v1/accounts/123

Set clear API deprecation policies so you’re not supporting old client applications forever.

State how long users have to upgrade, and how you’ll notify them of the deadlines. For instance, at GDS, we usually contact developers directly but we also announce deprecation in HTTP responses using a ‘Warning’ header.
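Here is a sketch of announcing deprecation in responses. The framework and the header value are illustrative assumptions, not the exact GDS implementation.

```python
from fastapi import FastAPI, Response

app = FastAPI()

@app.get("/v1/users/{user_id}")
def get_user_v1(user_id: int, response: Response):
    # Announce deprecation of the old endpoint alongside the normal payload.
    response.headers["Warning"] = '299 - "Deprecated endpoint: /v1/users will be retired, migrate to /v2"'
    return {"id": user_id}
```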

Your API consumers will want to test their application against your API before they go live. Give them a test service (sometimes known as a sandbox).

If you have a read only API, you do not necessarily need to provide a test service.

If your API has complex or stateful behaviour, consider providing a test service that mimics the live service as closely as you can, but bear in mind the cost of doing this.

If your API requires authorisation, for example using OAuth 2.0, you’ll need to include this in your test service or provide multiple levels of test service.

To help you decide what to provide, do user research – ask your API consumers what a sufficient test service would look like.

You should give your development team the ability to test your API using sample test data, if applicable. Testing your API should not involve using production systems and production data.
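One simple pattern, sketched below, is to make the base URL configurable so the same client code can be pointed at either the sandbox or the live service. The URLs and environment variable are placeholders.

```python
import os
import requests

# Placeholder base URLs for the live service and its sandbox.
BASE_URLS = {
    "live": "https://api-name.api.gov.uk",
    "sandbox": "https://sandbox.api-name.api.gov.uk",
}

def api_base():
    """Pick the API environment from configuration, defaulting to the sandbox."""
    return BASE_URLS[os.environ.get("API_ENV", "sandbox")]

def list_users():
    # Identical client code runs against sandbox test data or live data.
    return requests.get(f"{api_base()}/users", timeout=10).json()
```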

A well-configured Content Delivery Network (CDN) may provide sufficient scalability for highly cacheable open data access APIs.

For APIs that don’t have those characteristics, you should set quota expectations for your users in terms of the rate and capacity available. Start small, according to user needs, and respond to requests to increase capacity by making sure your API can meet the quotas you have set.

Make sure users can test your full API up to the quotas you have set.

Enforce the quotas you have set, even if you have spare capacity. This makes sure your users get a consistent experience when you do not have spare capacity, and can design and build their applications to work within your API quota.
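As an illustration of consistent quota enforcement, a provider might apply a fixed-window counter per application and return 429 once the quota is exceeded, whether or not spare capacity exists. The limit, window and helper below are assumptions for the sketch.

```python
import time
from collections import defaultdict

# Placeholder quota: 100 requests per application per 60-second window.
QUOTA = 100
WINDOW_SECONDS = 60
_counters = defaultdict(int)

def check_quota(application_id):
    """Return (status_code, message), enforcing the quota even when capacity is spare."""
    window = int(time.time()) // WINDOW_SECONDS
    _counters[(application_id, window)] += 1
    if _counters[(application_id, window)] > QUOTA:
        return 429, "Quota exceeded - retry in the next window"
    return 200, "OK"
```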

As with user-facing services, you should test the capacity of your APIs in a representative environment to make sure you can meet demand.

Where the API delivers personal or sensitive data, you, as the data controller, must provide sufficient timeouts on any cached information in your delivery network.
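A sketch of setting a short cache timeout on responses containing personal data, so any cached copy in the delivery network expires quickly. The max-age value and framework are illustrative assumptions.

```python
from fastapi import FastAPI, Response

app = FastAPI()

@app.get("/users/{user_id}")
def get_user(user_id: int, response: Response):
    # Short timeout so the delivery network does not hold personal data
    # for longer than necessary.
    response.headers["Cache-Control"] = "max-age=60"
    return {"id": user_id}
```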
