Cloud Application Software Development
Creating the next generation of high-performance data management tools.
Only the fastest possible solution is good enough for us. Making decisions faster than the competition is a straightforward way to get ahead.
High performance of centralized software solutions saves money directly when serverless solutions depend on them, because most serverless platforms bill per unit of running time – including time spent waiting. This can easily mean 50% lower costs for all consuming serverless applications, while also speeding up all dependent services at once.
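As a back-of-the-envelope sketch of this effect (all prices and call counts below are illustrative assumptions, not real billing data): a serverless function that waits on a central service pays for that waiting time, so halving the backend's latency halves the waiting cost.

```python
# Back-of-the-envelope estimate: serverless functions are billed for the
# time they spend running, including time spent waiting on a central service.
# All numbers are illustrative assumptions, not real billing data.

PRICE_PER_GB_SECOND = 0.0000166667  # assumed public-cloud serverless rate (USD)
MEMORY_GB = 1.0
CALLS_PER_MONTH = 100_000_000

def monthly_cost(wait_seconds_per_call: float) -> float:
    """Cost of the waiting time alone, across all calls in a month."""
    return CALLS_PER_MONTH * wait_seconds_per_call * MEMORY_GB * PRICE_PER_GB_SECOND

slow = monthly_cost(0.200)  # 200 ms spent waiting on a slower backend
fast = monthly_cost(0.100)  # 100 ms with a backend that is twice as fast

print(f"slow backend: ${slow:,.2f}/month")
print(f"fast backend: ${fast:,.2f}/month")
print(f"savings: {1 - fast / slow:.0%}")  # halving wait time halves waiting cost
```

The waiting cost scales linearly with backend latency, which is why a faster central service pays off across every consumer at once.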
Stability of software solutions is the key to operating complex software architectures without having to worry about system failures or data loss.
Our solutions are stress-tested under worst-case conditions using heavy-load scenarios. For every solution, we develop load-testing tools and release their source code for free, so you can run the tests yourself. These tools also measure the relevant performance metrics under specific loads and log everything directly to the console.
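A minimal sketch of what such a load-testing tool does: call a target repeatedly, record per-request latencies, and log summary metrics to the console. The workload here is a local stand-in; a real tool would issue HTTP requests against the service under test.

```python
import statistics
import time

def measure(target, requests: int = 1_000) -> dict:
    """Call `target` repeatedly and collect per-request latencies."""
    latencies = []
    for _ in range(requests):
        start = time.perf_counter()
        target()
        latencies.append(time.perf_counter() - start)
    latencies.sort()
    return {
        "requests": requests,
        "mean_ms": statistics.mean(latencies) * 1000,
        "p99_ms": latencies[int(requests * 0.99) - 1] * 1000,
        "max_ms": latencies[-1] * 1000,
    }

# Stand-in workload; a real load test would hit the service under test.
metrics = measure(lambda: sum(range(1000)))
print(metrics)  # logged directly to the console
```

Sorting the latencies once makes percentile lookups a simple index access, which keeps the measurement tool itself lightweight.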
We incorporate customer feedback into the development process to consistently ensure a customer-centric design.
The key goal is to make each solution as easy to use as possible. We keep the time needed to learn a solution as short as possible, so no time is wasted building up solution-specific knowledge. Most of the solutions offer simple RESTful APIs: there is no solution-specific language to learn, only a small set of request schemas.
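To illustrate what "a small set of request schemas" can mean in practice, here is a hypothetical write request as plain JSON. All field names are invented for illustration and do not describe an actual API contract; the point is that a plain HTTP body replaces any client library or query language.

```python
import json

# Hypothetical write-request schema: plain JSON, no client library required.
# Field names are illustrative assumptions, not a real API contract.
write_request = {
    "series": "sensor-42/temperature",
    "start": "2024-01-01T00:00:00Z",
    "interval_seconds": 60,  # fixed step between consecutive data points
    "values": [21.5, 21.6, 21.4],
}

body = json.dumps(write_request)
print(body)  # ready to send as an HTTP request body from any language
```

Because the schema is just JSON over HTTP, any language with an HTTP client can integrate without additional tooling.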
The design process should never start or end with what other solutions already provide.
For tackling new problems, we always choose a greenfield design approach. We only begin development if the disruption and innovation potential is high enough. Every solution contains multiple innovations. We also minimize external dependencies and build nearly everything ourselves.
To ensure cost efficiency and efficient resource usage, parallelization, memory pooling, and vectorization must scale with the underlying infrastructure.
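A minimal sketch of parallelism that scales with the underlying infrastructure (an assumed approach, not the actual implementation of these solutions): the worker pool is sized from the machine itself, so the same code uses a few cores on a laptop and hundreds on a large cloud VM.

```python
import os
from concurrent.futures import ThreadPoolExecutor

# Size the worker pool from the machine, so the same program scales
# from a small laptop to a cloud VM with hundreds of cores.
workers = os.cpu_count() or 1

def process(chunk: range) -> int:
    """Stand-in per-chunk workload; a real solution would do heavier work."""
    return sum(chunk)

# Split the work into roughly one chunk per worker and run them in parallel.
data = range(1_000_000)
chunk_size = len(data) // workers + 1
chunks = [data[i : i + chunk_size] for i in range(0, len(data), chunk_size)]

with ThreadPoolExecutor(max_workers=workers) as pool:
    total = sum(pool.map(process, chunks))

print(total)  # same result regardless of how many cores the machine has
```

Deriving the chunk count from `os.cpu_count()` is what makes the program's resource usage track the infrastructure instead of a hard-coded constant.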
In most cases, horizontal scaling is impractical for cloud-first solutions. Vertical scaling with the underlying hardware can be harder to implement, but it avoids cluster communication overhead. Public cloud providers offer single virtual machines with up to hundreds of cores and terabytes of RAM, so there is no need to scale horizontally anymore.
High availability of services is crucial for running critical solutions in near real-time scenarios.
Using cloud infrastructure has many advantages, but also some hidden pitfalls: servers can be restarted whenever the cloud providers deploy updates. Every solution is therefore optimized for fast start-up times and deadlock prevention under all circumstances. Whenever a running program is killed and restarted, it is available again within milliseconds.
Data Layer TS is a high-performance data storage service for equidistant numerical time series data. It can easily store tens of millions of time series with hundreds of millions of historical data points per series. Besides storing data, the service can also be used as a cache and compute layer to extend an existing persistence solution. Its main purpose is sharing time series data by making it accessible to any service, at any time, as fast as possible. Specialized aggregation functions, such as downsampling the temporal resolution, additionally enrich the shared data.

The simple HTTP API enables easy integration from any programming language or platform, and no driver or client library is needed. All maintenance tasks are fully automated: there is no index fragmentation or data compaction to worry about.

Massive parallelization, vectorized aggregations, optimized file structures, and intelligent in-memory data management give the service unmatched performance and efficiency. A single instance can process tens of thousands of dedicated HTTP requests per second with sub-millisecond latencies. When working with data in batches, tens of millions of data points can be ingested or retrieved per second.
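To illustrate what downsampling the temporal resolution means, here is a pure-Python sketch of a mean-based downsample. This is an illustration of the concept only, not the service's actual (vectorized) implementation.

```python
def downsample_mean(values: list[float], factor: int) -> list[float]:
    """Reduce temporal resolution by averaging consecutive blocks of `factor` points.

    For example, factor=60 turns one-second data into one-minute means.
    Trailing points that do not fill a complete block are dropped.
    """
    return [
        sum(values[i : i + factor]) / factor
        for i in range(0, len(values) - factor + 1, factor)
    ]

raw = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
print(downsample_mean(raw, 3))  # → [2.0, 5.0]
```

Because the input is equidistant, each output point still sits on a fixed grid – just a coarser one – which is what makes such aggregations cheap to compute and cache.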