This is the official Python SDK for [*refinery*](https://github.com/code-kern-ai/refinery), your **open-source** data-centric IDE for NLP.
## Installation
You can set up this SDK either by running `$ pip install kern-sdk`, or by cloning this repository and running `$ pip install -r requirements.txt`.
## Usage
### Creating a `Client` object
Once you have installed the package, you can create a `Client` object from any Python terminal as follows:
```python
from kern import Client
user_name = "your-username"
password = "your-password"
project_id = "your-project-id"  # can be found in the URL of the web application
client = Client(user_name, password, project_id)
# if you run the application locally, please use the following instead:
client = Client(user_name, password, project_id, uri="http://localhost:4455")
```

The `project_id` can be found in your browser, e.g. if you run the app on your localhost: `http://localhost:4455/app/projects/{project_id}/overview`
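If you prefer to grab the id programmatically, it can be parsed out of such a URL with a few lines of plain Python (`project_id_from_url` is a hypothetical helper for illustration, not part of the SDK):

```python
def project_id_from_url(url: str) -> str:
    # the app URL has the shape .../app/projects/{project_id}/...
    parts = url.rstrip("/").split("/")
    return parts[parts.index("projects") + 1]

print(project_id_from_url("http://localhost:4455/app/projects/1234-abcd/overview"))
# prints: 1234-abcd
```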
Alternatively, you can provide a `secrets.json` file in the directory where you want to run the SDK, looking as follows:
```json
{
  "user_name": "your-username",
  "password": "your-password",
  "project_id": "your-project-id"
}
```
Again, if you run on your localhost, you should also provide `"uri": "http://localhost:4455"`. Afterwards, you can access the client like this:
```python
client = Client.from_secrets_file("secrets.json")
```
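Conceptually, `from_secrets_file` just needs to read those JSON fields; a minimal sketch of the idea (not the SDK's actual implementation):

```python
import json

def load_secrets(path: str) -> dict:
    # reads user_name, password and project_id, plus optionally
    # "uri" when you run the app on your localhost
    with open(path) as f:
        return json.load(f)
```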
With the `Client`, you can easily integrate your data into any kind of system, be it a custom implementation, an AutoML system, or a plain data analytics framework 🚀
### Fetching labeled data
Now, you can easily fetch the data from your project:
```python
df = client.get_record_export(tokenize=False)
# if you set tokenize=True (default), the project-specific
# spaCy tokenizer will process your textual data
```
Alternatively, you can just run `kern pull` in your CLI, provided that the `secrets.json` file is in the same directory.
The `df` contains both your originally uploaded data (e.g. `headline` and `running_id` if you uploaded records like `{"headline": "some text", "running_id": 1234}`), and a triplet for each labeling task you create. This triplet consists of the manual labels, the weakly supervised labels, and their confidence. For extraction tasks, this data is on token-level.
An example export file looks like this:
```json
[
  {
    "running_id": "0",
    "Headline": "T. Rowe Price (TROW) Dips More Than Broader Markets",
    "__sentiment__MANUAL": null,
    "__sentiment__WEAK_SUPERVISION": "Negative",
    "__sentiment__WEAK_SUPERVISION__confidence": "0.6220"
  }
]
```

In this example, there is no manual label, but a weakly supervised label `"Negative"` has been set with 62.2% confidence.
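To make the column naming tangible, here is a toy sketch that resolves a final label per record, preferring manual labels and falling back to sufficiently confident weak labels (the `sentiment` task name, threshold, and values are made-up assumptions for illustration, not SDK defaults):

```python
# toy export rows following the triplet scheme described above
rows = [
    {"running_id": 0,
     "__sentiment__MANUAL": None,
     "__sentiment__WEAK_SUPERVISION": "Negative",
     "__sentiment__WEAK_SUPERVISION__confidence": 0.622},
    {"running_id": 1,
     "__sentiment__MANUAL": "Positive",
     "__sentiment__WEAK_SUPERVISION": "Positive",
     "__sentiment__WEAK_SUPERVISION__confidence": 0.970},
]

def resolve_label(row, task="sentiment", threshold=0.8):
    # prefer the manually set label ...
    manual = row[f"__{task}__MANUAL"]
    if manual is not None:
        return manual
    # ... and fall back to the weak label only when it is confident enough
    if row[f"__{task}__WEAK_SUPERVISION__confidence"] >= threshold:
        return row[f"__{task}__WEAK_SUPERVISION"]
    return None

print([resolve_label(r) for r in rows])  # [None, 'Positive']
```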
### Fetch lookup lists
- [ ] Todo
### Upload files
- [ ] Todo
### Adapters
#### Rasa
*refinery* is a great fit for building chatbots with [Rasa](https://github.com/RasaHQ/rasa). We've built an adapter with which you can easily create the required Rasa training data directly from *refinery*.
To do so, run the following:
```python
from kern.adapter import rasa
rasa.build_intent_yaml(
    client,
    "text",
    "__intent__WEAK_SUPERVISION"
)
```
This will create a `.yml` file looking as follows:
```yml
nlu:
- intent: check_balance
  examples: |
    - how much do I have on my savings account
    - how much money is in my checking account
    - What's the balance on my credit card account
```
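Conceptually, the adapter groups the exported texts by their intent label; a rough, self-contained sketch of that step (toy data, not the adapter's actual code):

```python
# toy (text, intent) pairs as they might come out of a refinery export
records = [
    ("how much do I have on my savings account", "check_balance"),
    ("how much money is in my checking account", "check_balance"),
    ("transfer 100 dollars to John", "transfer_money"),
]

# group example texts by their (weakly supervised) intent label
intents = {}
for text, intent in records:
    intents.setdefault(intent, []).append(text)

# emit an nlu section in the YAML shape shown above
lines = ["nlu:"]
for intent, examples in intents.items():
    lines.append(f"- intent: {intent}")
    lines.append("  examples: |")
    lines.extend(f"    - {example}" for example in examples)

print("\n".join(lines))
```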
If you want to provide a metadata-level label (such as sentiment), you can pass the optional argument `metadata_label_task`. Combined with a token-level labeling task for your entities, this will not only inject the label names on token-level, but also create lookup lists for your chatbot:
```yml
nlu:
- intent: check_balance
  metadata:
    sentiment: neutral
  examples: |
    - how much do I have on my [savings](account) account
    - how much money is in my [checking](account) account
    - What's the balance on my [credit card account](account)
- lookup: account
  examples: |
    - savings
    - checking
    - credit card account
```
Please make sure to also create the other files required to train your Rasa chatbot (`domain.yml`, `data/stories.yml` and `data/rules.yml`). For further reference, see their [documentation](https://rasa.com/docs/rasa).
### What's missing?
Let us know what open-source/closed-source NLP framework you are using, for which you'd like to have an adapter implemented in the SDK. To do so, simply create an issue in this repository with the tag "enhancement".
## Roadmap
- [ ] Register heuristics via wrappers
If you want to have something added, feel free to open an issue.

## Contributing

Contributions are what make the open source community such an amazing place to learn, inspire, and create. Any contributions you make are **greatly appreciated**.
If you have a suggestion that would make this better, please fork the repo and create a pull request. You can also simply open an issue with the tag "enhancement".
1. Fork the Project
2. Create your Feature Branch (`git checkout -b feature/AmazingFeature`)