
Commit 95219dd

Fix format of spark architecture page
1 parent 31521f9 commit 95219dd

File tree

1 file changed (+1 -0 lines changed)

docs/backend/spark-architecture.md

Lines changed: 1 addition & 0 deletions
````diff
@@ -15,6 +15,7 @@ The red lines are the path for the queries that come in through the PubSub, the
 ### Data processing
 
 Bullet can accept arbitrary sources of data as long as they can be ingested by Spark. They can be Kafka, Flume, Kinesis, and TCP sockets etc. In order to hook up your data to Bullet Spark, you just need to implement the [Data Producer Trait](https://github.com/bullet-db/bullet-spark/blob/master/src/main/scala/com/yahoo/bullet/spark/DataProducer.scala). In your implementation, you can either:
+
 * Use [Spark Streaming built-in sources](https://spark.apache.org/docs/latest/streaming-programming-guide.html#input-dstreams-and-receivers) to receive data. Below is a quick example for a direct Kafka source in Scala. You can also write it in Java:
 
 ```scala
````
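For context only: the `scala` block that follows in the documentation is the direct Kafka source example referenced in the bullet above, and it is not part of this one-line diff. A sketch of such a source, following the standard Spark Streaming Kafka 0.10 integration rather than the exact snippet from the Bullet docs, could look roughly like the code below. The broker address, consumer group, topic name, and object name are placeholder assumptions.

```scala
// Illustrative sketch only: a direct Kafka DStream as described in the
// Spark Streaming programming guide. All Kafka values are placeholders.
import org.apache.kafka.clients.consumer.ConsumerRecord
import org.apache.kafka.common.serialization.StringDeserializer
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.kafka010.KafkaUtils
import org.apache.spark.streaming.kafka010.LocationStrategies.PreferConsistent
import org.apache.spark.streaming.kafka010.ConsumerStrategies.Subscribe

object DirectKafkaSourceExample {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("bullet-spark-kafka-example")
    val ssc = new StreamingContext(conf, Seconds(1))

    // Standard consumer configuration for the Kafka 0.10+ direct stream.
    val kafkaParams = Map[String, Object](
      "bootstrap.servers" -> "localhost:9092",            // placeholder broker
      "key.deserializer" -> classOf[StringDeserializer],
      "value.deserializer" -> classOf[StringDeserializer],
      "group.id" -> "bullet-spark-example",                // placeholder group
      "auto.offset.reset" -> "latest",
      "enable.auto.commit" -> (false: java.lang.Boolean)
    )

    val topics = Array("bullet-data")                      // placeholder topic

    // Create the direct stream; each element is a Kafka ConsumerRecord.
    val stream = KafkaUtils.createDirectStream[String, String](
      ssc,
      PreferConsistent,
      Subscribe[String, String](topics, kafkaParams)
    )

    // A real DataProducer implementation would convert these values into
    // Bullet records instead of printing them.
    stream.map((record: ConsumerRecord[String, String]) => record.value).print()

    ssc.start()
    ssc.awaitTermination()
  }
}
```

A real implementation of the linked Data Producer Trait would hand a stream like this to Bullet Spark; see the trait source for the exact method to implement.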
