
Commit 31a2998

Merge pull request #60 from data-integrations/bugfix-ui/CDAP-15389-fix-markdown-tables
[CDAP-15389] Fixes markdown tables to be standard representation
2 parents: 153cb76 + 5668554 · commit 31a2998

8 files changed: +172 −168 lines changed

kafka-plugins-0.10/docs/Kafka-batchsink.md

Lines changed: 15 additions & 14 deletions
@@ -41,18 +41,19 @@ using compression type 'gzip'. The written events will be written in csv format
 to kafka running at localhost. The Kafka partition will be decided based on the provided key 'ts'.
 Additional properties like number of acknowledgements and client id can also be provided.

-
-    {
-        "name": "Kafka",
-        "type": "batchsink",
-        "properties": {
-            "referenceName": "Kafka",
-            "brokers": "localhost:9092",
-            "topic": "alarm",
-            "async": "FALSE",
-            "compressionType": "gzip",
-            "format": "CSV",
-            "kafkaProperties": "acks:2,client.id:myclient",
-            "key": "message"
-        }
+```json
+{
+    "name": "Kafka",
+    "type": "batchsink",
+    "properties": {
+        "referenceName": "Kafka",
+        "brokers": "localhost:9092",
+        "topic": "alarm",
+        "async": "FALSE",
+        "compressionType": "gzip",
+        "format": "CSV",
+        "kafkaProperties": "acks:2,client.id:myclient",
+        "key": "message"
     }
+}
+```
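The fix has the same shape in every file below: a four-space-indented code block becomes a fenced block with an explicit `json` language tag, which lets renderers apply syntax highlighting. A minimal sketch of the two Markdown forms (illustrative only, not taken from the diff):

````markdown
An indented code block carries no language information:

    { "name": "Kafka" }

A fenced block declares one, so it can be highlighted as JSON:

```json
{ "name": "Kafka" }
```
````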

kafka-plugins-0.10/docs/Kafka-batchsource.md

Lines changed: 29 additions & 29 deletions
@@ -73,36 +73,36 @@ on brokers host1.example.com:9092 and host2.example.com:9092. The source will ad
 a field named 'key' which will have the message key in it. It parses the Kafka messages
 using the 'csv' format with 'user', 'item', 'count', and 'price' as the message schema.

-    {
-        "name": "Kafka",
-        "type": "streamingsource",
-        "properties": {
-            "topics": "purchases",
-            "brokers": "host1.example.com:9092,host2.example.com:9092",
-            "format": "csv",
-            "keyField": "key",
-            "schema": "{
-                \"type\":\"record\",
-                \"name\":\"purchase\",
-                \"fields\":[
-                    {\"name\":\"key\",\"type\":\"bytes\"},
-                    {\"name\":\"user\",\"type\":\"string\"},
-                    {\"name\":\"item\",\"type\":\"string\"},
-                    {\"name\":\"count\",\"type\":\"int\"},
-                    {\"name\":\"price\",\"type\":\"double\"}
-                ]
-            }"
-        }
+```json
+{
+    "name": "Kafka",
+    "type": "streamingsource",
+    "properties": {
+        "topics": "purchases",
+        "brokers": "host1.example.com:9092,host2.example.com:9092",
+        "format": "csv",
+        "keyField": "key",
+        "schema": "{
+            \"type\":\"record\",
+            \"name\":\"purchase\",
+            \"fields\":[
+                {\"name\":\"key\",\"type\":\"bytes\"},
+                {\"name\":\"user\",\"type\":\"string\"},
+                {\"name\":\"item\",\"type\":\"string\"},
+                {\"name\":\"count\",\"type\":\"int\"},
+                {\"name\":\"price\",\"type\":\"double\"}
+            ]
+        }"
     }
+}
+```

 For each Kafka message read, it will output a record with the schema:

-    +================================+
-    | field name     | type          |
-    +================================+
-    | key            | bytes         |
-    | user           | string        |
-    | item           | string        |
-    | count          | int           |
-    | price          | double        |
-    +================================+
+| field name  | type             |
+| ----------- | ---------------- |
+| key         | bytes            |
+| user        | string           |
+| item        | string           |
+| count       | int              |
+| price       | double           |
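The table rewrite is the substance of CDAP-15389: the old `+====+` grid is not valid GitHub-flavored Markdown, so those tables rendered as plain text. GFM only needs a header row followed by a dash separator row; a minimal sketch of the standard form the commit switches to (assuming a GFM renderer):

```markdown
| field name | type  |
| ---------- | ----- |
| key        | bytes |
```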

kafka-plugins-0.10/docs/Kafka-streamingsource.md

Lines changed: 32 additions & 32 deletions
@@ -77,42 +77,42 @@ batch when the record was read. It will also contain a field named 'key' which w
 the message key in it. It parses the Kafka messages using the 'csv' format
 with 'user', 'item', 'count', and 'price' as the message schema.

-    {
-        "name": "Kafka",
-        "type": "streamingsource",
-        "properties": {
-            "topics": "purchases",
-            "brokers": "host1.example.com:9092,host2.example.com:9092",
-            "format": "csv",
-            "timeField": "readTime",
-            "keyField": "key",
-            "schema": "{
-                \"type\":\"record\",
-                \"name\":\"purchase\",
-                \"fields\":[
-                    {\"name\":\"readTime\",\"type\":\"long\"},
-                    {\"name\":\"key\",\"type\":\"bytes\"},
-                    {\"name\":\"user\",\"type\":\"string\"},
-                    {\"name\":\"item\",\"type\":\"string\"},
-                    {\"name\":\"count\",\"type\":\"int\"},
-                    {\"name\":\"price\",\"type\":\"double\"}
-                ]
-            }"
-        }
+```json
+{
+    "name": "Kafka",
+    "type": "streamingsource",
+    "properties": {
+        "topics": "purchases",
+        "brokers": "host1.example.com:9092,host2.example.com:9092",
+        "format": "csv",
+        "timeField": "readTime",
+        "keyField": "key",
+        "schema": "{
+            \"type\":\"record\",
+            \"name\":\"purchase\",
+            \"fields\":[
+                {\"name\":\"readTime\",\"type\":\"long\"},
+                {\"name\":\"key\",\"type\":\"bytes\"},
+                {\"name\":\"user\",\"type\":\"string\"},
+                {\"name\":\"item\",\"type\":\"string\"},
+                {\"name\":\"count\",\"type\":\"int\"},
+                {\"name\":\"price\",\"type\":\"double\"}
+            ]
+        }"
     }
+}
+```

 For each Kafka message read, it will output a record with the schema:

-    +================================+
-    | field name     | type          |
-    +================================+
-    | readTime       | long          |
-    | key            | bytes         |
-    | user           | string        |
-    | item           | string        |
-    | count          | int           |
-    | price          | double        |
-    +================================+
+| field name  | type             |
+| ----------- | ---------------- |
+| readTime    | long             |
+| key         | bytes            |
+| user        | string           |
+| item        | string           |
+| count       | int              |
+| price       | double           |

 Note that the readTime field is not derived from the Kafka message, but from the time that the
 message was read.
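In both source plugins the `schema` property embeds an Avro schema as an escaped JSON string. Unescaped for readability, the streaming source's schema corresponds to this Avro record (a reconstruction of the string shown in the diff above, not a line from the commit):

```json
{
  "type": "record",
  "name": "purchase",
  "fields": [
    { "name": "readTime", "type": "long" },
    { "name": "key",      "type": "bytes" },
    { "name": "user",     "type": "string" },
    { "name": "item",     "type": "string" },
    { "name": "count",    "type": "int" },
    { "name": "price",    "type": "double" }
  ]
}
```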

kafka-plugins-0.10/docs/KafkaAlerts-alertpublisher.md

Lines changed: 10 additions & 9 deletions
@@ -27,13 +27,14 @@ This example publishes alerts to already existing kafka topic alarm as json obje
 The kafka broker is running at localhost and port 9092. Additional kafka producer properties
 are like acks and client.id are specified as well.

-
-    {
-        "name": "Kafka",
-        "type": "alertpublisher",
-        "properties": {
-            "brokers": "localhost:9092",
-            "topic": "alarm",
-            "producerProperties": "acks:2,client.id:myclient"
-        }
+```json
+{
+    "name": "Kafka",
+    "type": "alertpublisher",
+    "properties": {
+        "brokers": "localhost:9092",
+        "topic": "alarm",
+        "producerProperties": "acks:2,client.id:myclient"
     }
+}
+```
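The `producerProperties` value packs extra Kafka producer settings into one string of comma-separated `key:value` pairs. Read that way (an assumption about how the plugin parses it, not something the diff states), `"acks:2,client.id:myclient"` corresponds to the producer settings:

```json
{
  "acks": "2",
  "client.id": "myclient"
}
```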

kafka-plugins-0.8/docs/Kafka-batchsink.md

Lines changed: 15 additions & 14 deletions
@@ -36,18 +36,19 @@ using compression type 'gzip'. The written events will be written in csv format
 to kafka running at localhost. The Kafka partition will be decided based on the provided key 'ts'.
 Additional properties like number of acknowledgements and client id can also be provided.

-
-    {
-        "name": "Kafka",
-        "type": "batchsink",
-        "properties": {
-            "referenceName": "Kafka",
-            "brokers": "localhost:9092",
-            "topic": "alarm",
-            "async": "FALSE",
-            "compressionType": "gzip",
-            "format": "CSV",
-            "kafkaProperties": "acks:2,client.id:myclient",
-            "key": "message"
-        }
+```json
+{
+    "name": "Kafka",
+    "type": "batchsink",
+    "properties": {
+        "referenceName": "Kafka",
+        "brokers": "localhost:9092",
+        "topic": "alarm",
+        "async": "FALSE",
+        "compressionType": "gzip",
+        "format": "CSV",
+        "kafkaProperties": "acks:2,client.id:myclient",
+        "key": "message"
     }
+}
+```

kafka-plugins-0.8/docs/Kafka-batchsource.md

Lines changed: 29 additions & 29 deletions
@@ -61,36 +61,36 @@ on brokers host1.example.com:9092 and host2.example.com:9092. The source will ad
 a field named 'key' which will have the message key in it. It parses the Kafka messages
 using the 'csv' format with 'user', 'item', 'count', and 'price' as the message schema.

-    {
-        "name": "Kafka",
-        "type": "streamingsource",
-        "properties": {
-            "topics": "purchases",
-            "brokers": "host1.example.com:9092,host2.example.com:9092",
-            "format": "csv",
-            "keyField": "key",
-            "schema": "{
-                \"type\":\"record\",
-                \"name\":\"purchase\",
-                \"fields\":[
-                    {\"name\":\"key\",\"type\":\"bytes\"},
-                    {\"name\":\"user\",\"type\":\"string\"},
-                    {\"name\":\"item\",\"type\":\"string\"},
-                    {\"name\":\"count\",\"type\":\"int\"},
-                    {\"name\":\"price\",\"type\":\"double\"}
-                ]
-            }"
-        }
+```json
+{
+    "name": "Kafka",
+    "type": "streamingsource",
+    "properties": {
+        "topics": "purchases",
+        "brokers": "host1.example.com:9092,host2.example.com:9092",
+        "format": "csv",
+        "keyField": "key",
+        "schema": "{
+            \"type\":\"record\",
+            \"name\":\"purchase\",
+            \"fields\":[
+                {\"name\":\"key\",\"type\":\"bytes\"},
+                {\"name\":\"user\",\"type\":\"string\"},
+                {\"name\":\"item\",\"type\":\"string\"},
+                {\"name\":\"count\",\"type\":\"int\"},
+                {\"name\":\"price\",\"type\":\"double\"}
+            ]
+        }"
     }
+}
+```

 For each Kafka message read, it will output a record with the schema:

-    +================================+
-    | field name     | type          |
-    +================================+
-    | key            | bytes         |
-    | user           | string        |
-    | item           | string        |
-    | count          | int           |
-    | price          | double        |
-    +================================+
+| field name  | type             |
+| ----------- | ---------------- |
+| key         | bytes            |
+| user        | string           |
+| item        | string           |
+| count       | int              |
+| price       | double           |

kafka-plugins-0.8/docs/Kafka-streamingsource.md

Lines changed: 32 additions & 32 deletions
@@ -74,42 +74,42 @@ batch when the record was read. It will also contain a field named 'key' which w
 the message key in it. It parses the Kafka messages using the 'csv' format
 with 'user', 'item', 'count', and 'price' as the message schema.

-    {
-        "name": "Kafka",
-        "type": "streamingsource",
-        "properties": {
-            "topics": "purchases",
-            "brokers": "host1.example.com:9092,host2.example.com:9092",
-            "format": "csv",
-            "timeField": "readTime",
-            "keyField": "key",
-            "schema": "{
-                \"type\":\"record\",
-                \"name\":\"purchase\",
-                \"fields\":[
-                    {\"name\":\"readTime\",\"type\":\"long\"},
-                    {\"name\":\"key\",\"type\":\"bytes\"},
-                    {\"name\":\"user\",\"type\":\"string\"},
-                    {\"name\":\"item\",\"type\":\"string\"},
-                    {\"name\":\"count\",\"type\":\"int\"},
-                    {\"name\":\"price\",\"type\":\"double\"}
-                ]
-            }"
-        }
+```json
+{
+    "name": "Kafka",
+    "type": "streamingsource",
+    "properties": {
+        "topics": "purchases",
+        "brokers": "host1.example.com:9092,host2.example.com:9092",
+        "format": "csv",
+        "timeField": "readTime",
+        "keyField": "key",
+        "schema": "{
+            \"type\":\"record\",
+            \"name\":\"purchase\",
+            \"fields\":[
+                {\"name\":\"readTime\",\"type\":\"long\"},
+                {\"name\":\"key\",\"type\":\"bytes\"},
+                {\"name\":\"user\",\"type\":\"string\"},
+                {\"name\":\"item\",\"type\":\"string\"},
+                {\"name\":\"count\",\"type\":\"int\"},
+                {\"name\":\"price\",\"type\":\"double\"}
+            ]
+        }"
     }
+}
+```

 For each Kafka message read, it will output a record with the schema:

-    +================================+
-    | field name     | type          |
-    +================================+
-    | readTime       | long          |
-    | key            | bytes         |
-    | user           | string        |
-    | item           | string        |
-    | count          | int           |
-    | price          | double        |
-    +================================+
+| field name  | type             |
+| ----------- | ---------------- |
+| readTime    | long             |
+| key         | bytes            |
+| user        | string           |
+| item        | string           |
+| count       | int              |
+| price       | double           |

 Note that the readTime field is not derived from the Kafka message, but from the time that the
 message was read.

kafka-plugins-0.8/docs/KafkaAlerts-alertpublisher.md

Lines changed: 10 additions & 9 deletions
@@ -23,13 +23,14 @@ This example publishes alerts to already existing kafka topic alarm as json obje
 The kafka broker is running at localhost and port 9092. Additional kafka producer properties
 are like acks and client.id are specified as well.

-
-    {
-        "name": "Kafka",
-        "type": "alertpublisher",
-        "properties": {
-            "brokers": "localhost:9092",
-            "topic": "alarm",
-            "producerProperties": "acks:2,client.id:myclient"
-        }
+```json
+{
+    "name": "Kafka",
+    "type": "alertpublisher",
+    "properties": {
+        "brokers": "localhost:9092",
+        "topic": "alarm",
+        "producerProperties": "acks:2,client.id:myclient"
     }
+}
+```
