Integrating Grafana into a web app

A little background: I have been working with BCI (brain-computer interface) and multi-channel EEG to monitor brain activity. When the user starts a session with the BCI cap on, the raw data streaming from each channel is stored in InfluxDB.
I can visualize this data in real time, which I successfully managed to achieve by integrating my InfluxDB database with Grafana, as shown below for Channel 1.

However, my final aim is to create a web app where a user can log in and see their current streaming session in real time, or any of their previous sessions. The problem with Grafana is that it is not easy to integrate/embed into an existing web app. I looked into Embed Panel, but this only lets me add a snapshot of the graph, whereas I need it to be real time, with data continuously streamed to the chart.
Any help would be greatly appreciated and thanks in advance!

Solutions/Answers:

Solution 1:

Grafana does not have a JavaScript library which can be loaded into a page to recreate panels in an external web app (relevant GitHub issue here).

If you are willing to use something other than Grafana, you can connect to InfluxDB using a JS driver like influxdb-nodejs or influent to get the data, and then use a plotting library (e.g. Flot, Plotly, D3, Smoothie Charts) to re-create the charts. This is the typical approach to the problem, but it does require more development time on your part.
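As a rough sketch of that approach (not a drop-in implementation): poll InfluxDB 1.x's HTTP /query endpoint and hand new points to a charting callback. The database name eeg, measurement channel1 and the renderChart() function below are hypothetical placeholders for your own setup:

// Hypothetical sketch: poll InfluxDB 1.x's /query endpoint and feed a chart.
const INFLUX_URL = 'http://localhost:8086/query';

async function fetchLatest(db, q) {
  const res = await fetch(`${INFLUX_URL}?db=${encodeURIComponent(db)}&q=${encodeURIComponent(q)}&epoch=ms`);
  const body = await res.json();
  // InfluxDB 1.x replies with {results: [{series: [{columns, values}]}]}
  const series = (body.results[0] && body.results[0].series) || [];
  return series.length ? series[0].values : []; // [[time, value], ...]
}

// Poll once per second and append whatever arrived since the last poll.
setInterval(async () => {
  const points = await fetchLatest('eeg', 'SELECT value FROM channel1 WHERE time > now() - 1s');
  points.forEach(([time, value]) => renderChart(time, value)); // plotting left to Flot/Plotly/etc.
}, 1000);

A production version would also need authentication and batching, but this is the general shape of the polling approach.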

If you do want to use grafana, however, you can now embed a grafana panel into an external app using an iframe like so:

<iframe 
    src="https://snapshot.raintank.io/dashboard-solo/snapshot/y7zwi2bZ7FcoTlB93WN7yWO4aMiz3pZb?from=1493369923321&to=1493377123321&panelId=4" 
    width="650" height="300" frameborder="0">
</iframe>
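Note that the snapshot URL above renders a fixed time range. If your Grafana version supports it, you can instead embed the live panel via a d-solo URL with a relative time range and a refresh parameter, so the iframe keeps updating; the host, dashboard UID/slug and panel id here are placeholders:

<iframe 
    src="https://my-grafana-host/d-solo/abc123/my-dashboard?orgId=1&panelId=4&from=now-5m&to=now&refresh=5s" 
    width="650" height="300" frameborder="0">
</iframe>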

How to make a choice between OpenTSDB and InfluxDB or other TSDS? [closed]

They are both open-source distributed time series databases: OpenTSDB for metrics, InfluxDB for metrics and events with no external dependencies; OpenTSDB, on the other hand, is based on HBase.
Is there any other comparison between them?
And if I want to store and query/analyze time series metrics in real time with no loss, which would be better?

Solutions/Answers:

Solution 1:

At one of the conferences I heard about people running something like Graphite/OpenTSDB for collecting metrics centrally, and InfluxDB locally on each server to collect metrics only for that server. (InfluxDB was chosen for local storage as it is easy to deploy and light on memory.)

This is not directly related to your question, but the idea appealed to me so much that I wanted to share it.

Solution 2:

Warp 10 is another option worth considering (I’m part of the team building it), check it out at http://www.warp10.io/.

It is based on HBase but also has a standalone version which will work fine for volumes in the low 100s of billions of datapoints, so it should fit most use cases out there.

Among the strengths of Warp 10 is the WarpScript language which is built from the ground up for manipulating (Geo) Time Series.

Solution 3:

Yet another open-source option is Blueflood: http://blueflood.io.

Disclaimer: like Paul Dix, I’m biased by the fact that I work on Blueflood.

Based on your short list of requirements, I’d say Blueflood is a good fit. Perhaps if you can specify the size of your dataset, the type of analysis you need to run or any other requirements that you think make your project unique, we could help steer you towards a more precise answer. Without knowing more about what you want to do, it’s going to be hard for us to answer more meaningfully.

select from InfluxDB where value is null

If my data (conceptually) is:
# a b c
-------
1 1   1
2 1 1 0
3 1 0 1

Then in legacy SQL language, the statement would be:
select * from table where b is null

I cannot find a similar condition within the InfluxDB Query Language documentation.
I am working with data where there is optionally a numeric value in a column, and I want to select records where this column is empty/null. Since these are integers, they appear not to work with the matching regexes at all, so something like where !~ /.*/ is out.

Solutions/Answers:

Solution 1:

InfluxDB doesn't understand NULL and will show an error if you use is null or is not null in a query. In order to find something which is like null, we need to look for empty values, i.e. use empty single quotes:

SELECT * FROM service_detail where username != ''
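Conversely, to select the rows where the value is empty (what the question asks for), compare against the empty string. One caveat: this only matches stored empty strings, not rows where the field was never written at all.

SELECT * FROM service_detail where username = ''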

Solution 2:

You cannot search for nulls in InfluxDB <0.9, and you will not be able to insert nulls in InfluxDB >=0.9.

Solution 3:

For fields where there exists at least one “invalid” value (for example, a negative size in bytes) you can create a query which allows you to find rows with missing data, without modifying the stored data.

I have a metric with 5 fields: mac, win, win64, linux, and linux64. Not every field is filled in on every row, and occasionally a row is not added at all because no data was available at the time.

By first querying the data with a fill() clause set to my invalid value: -1 in a subquery, I can then wrap that in an outer query to find either rows which are missing at least one column (using OR between WHERE expressions) or rows with no data at all (using AND between WHERE expressions).

The subquery looks like this:

SELECT count(*) FROM "firefox" GROUP BY time(1d) fill(-1)

That gives me all of my rows (there’s one per day) with a 1 (the count of the occurrences of that field for the day) or a -1 (missing) as the value returned for each field.

I can then choose the rows that have no data from that with an outer query like this (note in this case the returned fields are all -1 and therefore uninteresting and can be hidden in your visualizer, like Grafana):

SELECT * from (_INNER_QUERY_HERE_) WHERE count_linux = -1 AND count_linux64 = -1 AND count_mac = -1 AND count_win = -1 AND count_win64 = -1;

Or I can choose rows with at least one missing field like this:

SELECT * from (_INNER_QUERY_HERE_) WHERE count_linux = -1 OR count_linux64 = -1 OR count_mac = -1 OR count_win = -1 OR count_win64 = -1;

There is still room for improvement, though: you have to specify the field names in the outer query manually, whereas something like WHERE * = -1 would be much nicer. Also, depending on the size of your data, this query will be SLOOOOOOW, and filtering by time is very confusing when you use nested queries. Obviously it would be nicer if the Influx folks just added is null or not null or some similar syntax to InfluxQL, but as linked above they don't seem too interested in doing so.

InfluxDB storage size on disk

All I want is simply to know how much space my InfluxDB database takes on my HDD. The stats() command gives me dozens of numbers but I don’t know which one shows what I want.

Solutions/Answers:

Solution 1:

Stats output does not contain that information. The size of the directory structure on disk will give that info.

du -sh /var/lib/influxdb/data/<db name>

Where /var/lib/influxdb/data is the data directory defined in influxdb.conf.

How to install InfluxDB in Windows

I am new to InfluxDB. I could not find any details about installing InfluxDB on Windows. Is there any way to install it on a Windows machine or do I need to use a Linux server for development purposes?

Solutions/Answers:

Solution 1:

The current 0.9 branch of InfluxDB is pure Go and can be compiled on Windows with the following commands:

cd %GOPATH%/src/github.com/influxdb
go get -u -f ./...
go build ./...

Of course, you will need Go (>1.4), git and hg.

If you do not want to compile your own version, you can also find here my own Windows x86 binaries for v0.9.0-rc11:
https://github.com/adriencarbonne/influxdb/releases/download/v0.9.0-rc11/influxdb_v0.9.0-rc11.zip

To run InfluxDB, type: influxd.exe.

Or even better, create the following config file, save it as influxdb.conf and run influxd --config influxdb.conf:

reporting-disabled = true

#[logging]
#level = "debug"
#file = "influxdb.log"

[admin]
enabled = true
port = 8083

[api]
port = 8086

[data]
dir = "data"

[broker]
dir = "broker"

Solution 2:

I struggled quite a lot with this issue, so I'll post the full process step by step. This will hopefully help other people who land on this post.

Edit: WARNING, this doesn't work if Go and the projects folder are installed to a custom path (not c:\go). In this case, go get breaks with cryptic messages about unrecognized import paths (thanks to user626528 for the info).

Table of contents:

  1. PREVIOUS DOWNLOADS
  2. COMPILATION
  3. EXECUTION

1. PREVIOUS DOWNLOADS

Go for Windows (get the .msi):
https://golang.org/dl/

GIT for Windows:
http://git-scm.com/download/win


2. COMPILATION

cd to C:\Go

Create our $GOPATH in “C:\Go\projects” (anywhere but C:\Go itself, which is the $GOROOT).

> mkdir projects

Set the $GOPATH variable to this new directory:

> set GOPATH=C:\Go\projects

Pull the influxdb code from github into our $GOPATH:

> go get github.com/influxdata/influxdb

cd to C:\Go\projects\src\github.com\influxdata\influxdb

Pull the project dependencies:

> go get -u -f ./...

Finally, build the code:

> go build ./...

…this will create 3 executables under C:\Go\projects\bin:

influx.exe 
influxd.exe
urlgen.exe

3. EXECUTION

To start the service:

influxd -config influxdb.conf

For that, you first need to create an influxdb.conf file with the following text:

reporting-disabled = true

#[logging]
#level = "debug"
#file = "influxdb.log"
#write-tracing = false

[admin]
enabled = true
port = 8083

[api]
port = 8086

[data]
dir = "data"

[broker]
dir = "broker"

Once the service is started, you can open Chrome and go to http://localhost:8083, and start playing with InfluxDB.

Default values for username and password are:

username: root
password: root

Solution 3:

A few updates to Xavier Peña's solution to build the latest InfluxDB. Notice the difference in the GitHub URL and the path.

C:\Go\projects>go get github.com/influxdata/influxdb

C:\Go\projects>go get github.com/sparrc/gdm

C:\Go\projects>cd C:\Go\projects\src\github.com\influxdata\influxdb

C:\Go\projects\src\github.com\influxdata\influxdb>go get -u -f ./...

C:\Go\projects\src\github.com\influxdata\influxdb>c:\Go\projects\bin\gdm.exe restore

C:\Go\projects\src\github.com\influxdata\influxdb>go build ./...

C:\Go\projects\src\github.com\influxdata\influxdb>go install ./...

C:\Go\projects\bin>influxd config > influxdb.generated.conf

C:\Go\projects\bin>influxd -config influxdb.generated.conf

Solution 4:

Windows is officially supported. Go to https://portal.influxdata.com/downloads and download it from there.

Solution 5:

The current 0.9 branch of InfluxDB is pure Go and can be compiled on Windows. The main prerequisites are Go 1.4, git (e.g. TortoiseGit together with msysGit) and hg (e.g. TortoiseHg).

Using this setup I’ve successfully compiled and run influxdb on Win7 x64.

Solution 6:

There wasn't an InfluxDB Windows version as of Sep 30 '14; there were only Linux and OSX versions.

Update: as of 04/09/2015, the current 0.9 version has a Windows version.

Export data from InfluxDB

Is there a way (plugin or tool) to export the data from the database (or the database itself)? I'm looking for this feature as I need to migrate a DB from the present host to another one.

Solutions/Answers:

Solution 1:

You could dump each table and load them through REST interface:

curl "http://hosta:8086/db/dbname/series?u=root&p=root&q=select%20*%20from%20series_name%3B" > series_name.json
curl -XPOST -d @series_name.json "http://hostb:8086/db/dbname/series?u=root&p=root"

Or maybe you want to add a new host to the cluster? It's easy, and you'll get a master-master replica for free. See Cluster Setup.

Solution 2:

Export data:

sudo service influxdb start   (or skip this step if the service is already running)
influxd backup -database grpcdb /opt/data

grpcdb is the name of the DB, and in this case the backup will be saved under the /opt/data directory.

Import Data:

sudo service influxdb stop   (the service should not be running)
influxd restore -metadir /var/lib/influxdb/meta /opt/data
influxd restore -database grpcdb -datadir /var/lib/influxdb/data /opt/data
sudo service influxdb start

Solution 3:

As ezotrank says, you can dump each table. There’s a missing “-d” in ezotrank’s answer though. It should be:

curl "http://hosta:8086/db/dbname/series?u=root&p=root&q=select%20*%20from%20series_name%3B" > series_name.json
curl -XPOST -d @series_name.json "http://hostb:8086/db/dbname/series?u=root&p=root"

(Ezotrank, sorry, I would’ve just posted a comment directly on your answer, but I don’t have enough reputation points to do that yet.)

Solution 4:

If I use curl, I get timeouts, and if I use influxd backup, it's not in a format I can read.

I’m getting fine results like this:

influx -host influxdb.mydomain.com -database primary -format csv -execute "select time,value from \"continuous\" where channel='ch123'" > outtest.csv

Solution 5:

From 1.5 onwards, the InfluxDB OSS backup utility provides a newer option which is much more convenient:

-portable: Generates backup files in the newer InfluxDB Enterprise-compatible format. Highly recommended for all InfluxDB OSS users

Export

To back up everything:

influxd backup -portable <path-to-backup>

To back up only the myperf database:

influxd backup -portable -database myperf <path-to-backup>

Import

To restore all databases found within the backup directory:

influxd restore -portable <path-to-backup>

To restore only the myperf database (myperf database must not exist):

influxd restore -portable -db myperf <path-to-backup>

Additional options include specifying timestamp, shard, etc. See all the other supported options here.

Solution 6:

If you have access to the machine running InfluxDB, I would say use the influx_inspect command. The command is simple and very fast. It will dump your DB in line protocol. You can then import this dump using the influx -import command.
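A hedged sketch of that workflow on a typical 1.x install (the data/WAL paths, database name and output file below are placeholders for your own setup):

influx_inspect export -datadir /var/lib/influxdb/data -waldir /var/lib/influxdb/wal -database mydb -out mydb_export.lp
influx -import -path=mydb_export.lp -precision=ns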

In Influxdb, How to delete all measurements?

I know DROP MEASUREMENT measurement_name is used to drop a single measurement. How do I delete all measurements at once?

Solutions/Answers:

Solution 1:

There's no way to drop all of the measurements directly, but the query below will achieve the same result.

DROP SERIES FROM /.*/

Solution 2:

For those who are looking to use WHERE clause with DROP SERIES, here it is:

DROP SERIES FROM /.*/ WHERE "your-tag" = 'tag-value-to-delete-data'

Note the quotation marks after the WHERE clause: double quotes around the tag key and single quotes around the tag value. They should be like this for DROP SERIES, as of InfluxDB v1.3.

Otherwise, you might get an error like ERR: invalid expression: ...

Solution 3:

Use “DELETE Measurement” from the InfluxDB admin panel.

Can you delete data from influxdb?

How do you delete data from influxdb?
The documentation shows it should be as simple as:
delete from foo where time < now() - 1h

For some reason, influxdb rejects my delete statements, saying "Delete queries can't have where clause that doesn't reference time":

select * from bootstrap where duration > 1000 and time > 14041409940s and time < now()

I want to delete these 5 entries whose duration > 1000 seconds

This should be a valid sql statement, yet it fails

None of these delete statements work either
delete from bootstrap where duration > 3000000

delete from bootstrap where duration > 300000

delete from bootstrap where time = 1404140994043

delete from bootstrap where duration > 300000 and time > 1404141054508

delete from bootstrap where duration > 300000 and time > 1404141054508s

delete from bootstrap where time > 1404141054508s and duration > 300000

delete from bootstrap where duration > 30000 and time > 1s

Documentation reference
http://influxdb.com/docs/v0.8/api/query_language.html
Update
Additional queries
delete from bootstrap where time > 1404141416824 and duration > 3000;
delete sequence_number from bootstrap where time > 1s and duration > 1000;

Maybe this is a bug?
https://github.com/influxdb/influxdb/issues/975
https://github.com/influxdb/influxdb/issues/84

Solutions/Answers:

Solution 1:

It appears that you can do this in influxdb 0.9. For instance, here’s a query that just succeeded for me:

DROP SERIES FROM temperature WHERE machine='zagbar'

(Per generous comment by @MuratCorlu, I’m reposting my earlier comment as an answer…)

Solution 2:

With Influx, you can only delete by time.

For example, the following are invalid:

#Wrong
DELETE FROM foo WHERE time < '2014-06-30' and duration > 1000 #Can't delete if where clause has non time entity

This is how I was able to delete the data

DELETE FROM foo WHERE time > '2014-06-30' and time < '2014-06-30 15:16:01'

Update: this worked on Influx 0.8. Supposedly it doesn't work on Influx 0.9.

Solution 3:

I’m surprised that nobody has mentioned InfluxDB retention policies for automatic data removal. You can set a default retention policy and also set them on a per-database level.

From the docs:

CREATE RETENTION POLICY <retention_policy_name> ON <database_name> DURATION <duration> REPLICATION <n> [DEFAULT]
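For example, the following (with placeholder names) creates a policy that keeps two weeks of data and makes it the database default:

CREATE RETENTION POLICY "two_weeks" ON "mydb" DURATION 14d REPLICATION 1 DEFAULT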

Solution 4:

Because InfluxDB is a bit painful about deletes, we use a schema that has a boolean field called “ForUse”, which looks like this when posting via the line protocol (v0.9):

your_measurement,your_tag=foo ForUse=TRUE,value=123.5 1262304000000000000

You can overwrite the same measurement, tag key, and time with whatever field keys you send, so we do “deletes” by setting “ForUse” to false, and letting retention policy keep the database size under control.

Since the overwrite happens seamlessly, you can retroactively add the schema too. Noice.
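To illustrate the soft delete (with the same placeholder names as above): since writing a point with the same measurement, tag set and timestamp overwrites only the fields you send, "deleting" the point is just another write,

your_measurement,your_tag=foo ForUse=FALSE 1262304000000000000

and queries then skip soft-deleted rows with a WHERE "ForUse" = true condition.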

Solution 5:

You can only delete with your time field, which is a number.

Delete from <measurement> where time=123456

will work. Remember not to use single quotes or double quotes. It's a number.

Solution 6:

The accepted answer (DROP SERIES) will work for many cases, but will not work if the records you need to delete are distributed among many time ranges and tag sets.

A more general-purpose approach (albeit a slower one) is to issue the delete queries one by one from another programming language (see the sketch after this list):

  1. Query for all the records you need to delete (or use some filtering logic in your script)
  2. For each of the records you want to delete:

    1. Extract the time and the tag set (ignore the fields)
    2. Format this into a query, e.g.

      DELETE FROM "things" WHERE time=123123123 AND tag1='val' AND tag2='val'

    3. Send each of the queries, one at a time
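A hedged sketch of such a script (Node.js 18+ with global fetch, against InfluxDB 1.x's HTTP API; the database mydb, measurement things, tag key tag1 and the SELECT filter are all placeholders):

// Hypothetical sketch: select the doomed rows, then DELETE them one by one,
// keyed on time and tag set (fields are ignored). All names are placeholders.
const INFLUX = 'http://localhost:8086/query';
const DB = 'mydb';

async function influxQuery(q, method = 'GET') {
  const params = `db=${encodeURIComponent(DB)}&epoch=ns&q=${encodeURIComponent(q)}`;
  const res = method === 'GET'
    ? await fetch(`${INFLUX}?${params}`)
    : await fetch(INFLUX, {
        method: 'POST',
        headers: { 'Content-Type': 'application/x-www-form-urlencoded' },
        body: params,
      });
  return res.json();
}

(async () => {
  // 1. Find the records to delete (placeholder filter logic).
  const out = await influxQuery("SELECT * FROM things WHERE broken = true");
  const series = (out.results[0] && out.results[0].series) || [];
  for (const s of series) {
    const timeIdx = s.columns.indexOf('time');
    const tagIdx = s.columns.indexOf('tag1'); // placeholder tag key
    for (const row of s.values) {
      // 2. One DELETE per record, keyed on time (ns epoch) and tags.
      await influxQuery(
        `DELETE FROM "things" WHERE time=${row[timeIdx]} AND tag1='${row[tagIdx]}'`,
        'POST'
      );
    }
  }
})();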

How to format time in influxdb select query

I am new to InfluxDB. I am querying data in the admin UI and I see time as a timestamp. Is it possible to see it formatted as a date and time?

Solutions/Answers:

Solution 1:

You can select RFC 3339 formatting by entering the following command in the CLI:

precision rfc3339

Solution 2:

Although tierry answered the question already, here is the link to the documentation as well:

precision ‘rfc3339|h|m|s|ms|u|ns’

Specifies the format/precision of the timestamp: rfc3339 (YYYY-MM-DDTHH:MM:SS.nnnnnnnnnZ), h (hours), m (minutes), s (seconds), ms (milliseconds), u (microseconds), ns (nanoseconds). Precision defaults to nanoseconds.

https://docs.influxdata.com/influxdb/v1.5/tools/shell/#influx-arguments

Solution 3:

To convert the InfluxDB timestamp to a normal timestamp, you can start the console with:

influx -precision rfc3339

Now try your query; it should work.

For more details, follow this link:
https://www.influxdata.com/blog/tldr-influxdb-tech-tips-august-4-2016/

Solution 4:

The Web Admin Interface was deprecated as of InfluxDB 1.1 (disabled by default).

The precision of the timestamp can be controlled to return hours (h), minutes (m), seconds (s), milliseconds (ms), microseconds (u) or nanoseconds (ns). A special precision option is RFC3339 which returns the timestamp in RFC3339 format with nanosecond precision. The mechanism for specifying the desired time precision is different for the CLI and HTTP API.

To set the precision in CLI, you write precision <RFC3339|h|m|s|ms|us|ns> in the command line depending on what precision you want. The default value of the precision for the CLI is nanoseconds.

To set the precision in HTTP API, you pass epoch=<h|m|s|ms|us|ns> as a query parameter. The default value of the precision is RFC3339.
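For instance, a hedged HTTP API example requesting millisecond-epoch timestamps (the database and query are placeholders):

curl -G 'http://localhost:8086/query' --data-urlencode "db=mydb" --data-urlencode "epoch=ms" --data-urlencode "q=SELECT * FROM cpu LIMIT 5"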

Usecases: InfluxDB vs. Prometheus [closed]

Following the Prometheus webpage, one main difference between Prometheus and InfluxDB is the use case: while Prometheus stores only time series, InfluxDB is better geared towards storing individual events. Since there has been some major work done on the storage engine of InfluxDB, I wonder if this is still true.
I want to set up a time series database, and apart from the push/pull model (and probably a difference in performance) I can see no big thing which separates the two projects. Can someone explain the difference in use cases?

Solutions/Answers:

Solution 1:

InfluxDB CEO and developer here. The next version of InfluxDB (0.9.5) will have our new storage engine. With that engine we’ll be able to efficiently store either single event data or regularly sampled series. i.e. Irregular and regular time series.

InfluxDB supports int64, float64, bool, and string data types using different compression schemes for each one. Prometheus only supports float64.

For compression, the 0.9.5 version will have compression competitive with Prometheus. For some cases we'll see better results, since we vary the compression on timestamps based on what we see. The best-case scenario is a regular series sampled at exact intervals: there, by default, we can compress the timestamps of 1k points into an 8-byte starting time, a delta (zig-zag encoded) and a count (also zig-zag encoded).

Depending on the shape of the data we’ve seen < 2.5 bytes per point on average after compactions.

YMMV based on your timestamps, the data type, and the shape of the data. Random floats with nanosecond scale timestamps with large variable deltas would be the worst, for instance.

The variable precision in timestamps is another feature that InfluxDB has. It can represent second, millisecond, microsecond, or nanosecond scale times. Prometheus is fixed at milliseconds.

Another difference is that writes to InfluxDB are durable after a success response is sent to the client. Prometheus buffers writes in memory and by default flushes them every 5 minutes, which opens a window of potential data loss.

Our hope is that once 0.9.5 of InfluxDB is released, it will be a good choice for Prometheus users to use as long term metrics storage (in conjunction with Prometheus). I’m pretty sure that support is already in Prometheus, but until the 0.9.5 release drops it might be a bit rocky. Obviously we’ll have to work together and do a bunch of testing, but that’s what I’m hoping for.

For single server metrics ingest, I would expect Prometheus to have better performance (although we’ve done no testing here and have no numbers) because of their more constrained data model and because they don’t append writes to disk before writing out the index.

The query languages of the two are very different. I'm not sure what they support that we don't yet, or vice versa, so you'd need to dig into the docs on both to see if there's something one can do that you need. Longer term, our goal is to have InfluxDB's query functionality be a superset of Graphite, RRD, Prometheus and other time series solutions. I say superset because we want to cover those in addition to more analytic functions later on. It'll obviously take us time to get there.

Finally, a longer term goal for InfluxDB is to support high availability and horizontal scalability through clustering. The current clustering implementation isn’t feature complete yet and is only in alpha. However, we’re working on it and it’s a core design goal for the project. Our clustering design is that data is eventually consistent.

To my knowledge, Prometheus’ approach is to use double writes for HA (so there’s no eventual consistency guarantee) and to use federation for horizontal scalability. I’m not sure how querying across federated servers would work.

Within an InfluxDB cluster, you can query across the server boundaries without copying all the data over the network. That’s because each query is decomposed into a sort of MapReduce job that gets run on the fly.

There’s probably more, but that’s what I can think of at the moment.

Solution 2:

We've got the marketing message from the two companies in the other answers. Now let's ignore it and get back to the sad real world of time series data.

Some History

InfluxDB and Prometheus were made to replace old tools from a past era (RRDtool, Graphite).

InfluxDB is a time series database. Prometheus is a sort-of metrics collection and alerting tool, with a storage engine written just for that. (I’m actually not sure you could [or should] reuse the storage engine for something else)

Limitations

Sadly, writing a database is a very complex undertaking. The only way both these tools manage to ship something is by dropping all the hard features relating to high-availability and clustering.

To put it bluntly, it’s a single application running only a single node.

Prometheus has no goal to support clustering and replication whatsoever. The official way to support failover is to “run 2 nodes and send data to both of them”. Ouch. (Note that it's seriously the ONLY existing way possible; it's written countless times in the official documentation.)

InfluxDB had been talking about clustering for years… until it was officially abandoned in March. Clustering ain't on the table anymore for InfluxDB. Just forget it. Whenever it is done (supposing it ever is), it will only be available in the Enterprise Edition.

https://influxdata.com/blog/update-on-influxdb-clustering-high-availability-and-monetization/

Within the next few years, we will hopefully have a well-engineered time-series database that is handling all the hard problems relating to databases: replication, failover, data safety, scalability, backup…

At the moment, there is no silver bullet.

What to do

Evaluate the volume of data to be expected.

100 metrics * 100 sources, sampled every 1 second => 10,000 datapoints per second => 864 mega-datapoints per day.

The nice thing about times series databases is that they use a compact format, they compress well, they aggregate datapoints, and they clean old data. (Plus they come with features relevant to time data series.)

Supposing that a datapoint is treated as 4 bytes, that's only a few gigabytes per day (864 million points * 4 bytes ≈ 3.5 GB). Lucky for us, there are systems with 10 cores and 10 TB drives readily available. That could probably run on a single node.

The alternative is to use a classic NoSQL database (Cassandra, ElasticSearch or Riak) then engineer the missing bits in the application. These databases may not be optimized for that kind of storage (or are they? modern databases are so complex and optimized, can’t know for sure unless benchmarked).

You should evaluate the capacity required by your application. Write a proof of concept with these various databases and measure things.

See if it falls within the limitations of InfluxDB. If so, it’s probably the best bet. If not, you’ll have to make your own solution on top of something else.

Solution 3:

InfluxDB simply cannot hold production load (metrics) from 1000 servers. It has some real problems with data ingestion and ends up stalled/hung and unusable. We tried to use it for a while, but once the data amount reached some critical level it could not be used anymore. No memory or CPU upgrades helped.
Therefore our experience is definitely to avoid it; it's not a mature product and has serious architectural design problems. And I am not even talking about the sudden shift to commercial by Influx.

Next we researched Prometheus, and while it required rewriting queries, it now ingests 4 times more metrics without any problems whatsoever, compared to what we tried to feed to Influx. And all that load is handled by a single Prometheus server; it's fast, reliable, and dependable. This is our experience running a huge international internet shop under pretty heavy load.

Solution 4:

IIRC, the current Prometheus implementation is designed around all the data fitting on a single server. If you have gigantic quantities of data, it may not all fit in Prometheus.

References