# Symfony Hugging Face Examples

This directory contains various examples of how to use Symfony AI with [Hugging Face](https://huggingface.co/);
they sit on top of the [Hugging Face Inference API](https://huggingface.co/inference-api).

The Hugging Face Hub provides access to a wide range of pre-trained open-source models for various AI tasks, which you
can use directly via Symfony AI's Hugging Face Platform Bridge.
## Getting Started

Hugging Face offers a free tier for its Inference API, which you can use to get started. Create an account on
[Hugging Face](https://huggingface.co/join), generate an
[access token](https://huggingface.co/settings/tokens), and add it to the `.env.local` file in the root of the
examples directory as `HUGGINGFACE_KEY`:

```bash
echo 'HUGGINGFACE_KEY=hf_your_access_key' >> .env.local
```
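
A quick sanity check after this step is to confirm the line actually landed in `.env.local`. The snippet below repeats the append and then greps for the key (`hf_your_access_key` is a placeholder for your real token); the optional online check against the Hub's `whoami-v2` endpoint is commented out since it needs network access:

```shell
# Append the key, then confirm it was written:
echo 'HUGGINGFACE_KEY=hf_your_access_key' >> .env.local
grep -q '^HUGGINGFACE_KEY=hf_' .env.local && echo "key configured"

# Optional: validate the token itself against the Hub (requires network):
# curl -s -H "Authorization: Bearer hf_your_access_key" https://huggingface.co/api/whoami-v2
```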

Unlike other platforms, Hugging Face provides close to 50,000 models for various AI tasks, which makes it easy to
try out different, specialized models for your use case. Common use cases are covered in this example directory.

## Running the Examples

You can run an example by executing the following command:

```bash
# Run all examples with the runner:
./runner huggingface

# Or run a specific example standalone, e.g., object detection:
php huggingface/object-detection.php
```

## Available Models

When running the examples, you might find that some models are unavailable and encounter an error like:

```
Model, provider or task not found (404).
```

This can happen when a pre-selected model is no longer available or is not "warmed up" on Hugging Face's side. You can
change the model used in an example by updating the model name in the example script.

To find available models for a specific task, check out the [Hugging Face Model Hub](https://huggingface.co/models)
and filter by the desired task, or use the `huggingface/_model-listing.php` script.
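
The listing script is a convenience wrapper; the same information is also exposed by the public Hub HTTP API. As a sketch, the block below only builds and prints a query URL (the `pipeline_tag` and `limit` parameters belong to the public `huggingface.co/api/models` endpoint), leaving the actual fetch as a commented `curl` since it needs network access:

```shell
# Build a Hub API query for the first few object-detection models:
task="object-detection"
url="https://huggingface.co/api/models?pipeline_tag=${task}&limit=5"
echo "$url"
# Fetch the JSON listing with: curl -s "$url"
```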

### Listing Available Models

List _all_ models:

```bash
php huggingface/_model-listing.php
```
(This is limited to 1000 results by default.)

Limit models to a specific _task_, e.g., object-detection:

```bash
php huggingface/_model-listing.php --task=object-detection
```

Limit models to a specific _provider_, e.g., "hf-inference":

```bash
# Single provider:
php huggingface/_model-listing.php --provider=hf-inference

# Multiple providers:
php huggingface/_model-listing.php --provider=sambanova,novita
```

Search for models matching a specific term, e.g., "gpt":

```bash
php huggingface/_model-listing.php --search=gpt
```

Limit models to currently warm models:

```bash
php huggingface/_model-listing.php --warm
```

You can combine the task filter with the provider filter, or the task filter with the warm filter, but not the
provider and warm filters:

```bash
# Combine provider and task:
php huggingface/_model-listing.php --provider=hf-inference --task=object-detection

# Combine task and warm:
php huggingface/_model-listing.php --task=object-detection --warm

# Search for a warm gpt model for text-generation:
php huggingface/_model-listing.php --warm --task=text-generation --search=gpt
```

### Model Information

To get detailed information about a specific model, use the `huggingface/_model-info.php` script:

```bash
php huggingface/_model-info.php google/vit-base-patch16-224

Hugging Face Model Information
==============================

 Model: google/vit-base-patch16-224
 ----------- -----------------------------
  ID          google/vit-base-patch16-224
  Downloads   2985836
  Likes       889
  Task        image-classification
  Warm        yes
 ----------- -----------------------------

 Inference Provider:
 ----------------- -----------------------------
  Provider           hf-inference
  Status             live
  Provider ID        google/vit-base-patch16-224
  Task               image-classification
  Is Model Author    no
 ----------------- -----------------------------
```
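
The fields shown above (task, downloads, likes, warm status) mirror what the Hub HTTP API returns for a single model; `https://huggingface.co/api/models/<model-id>` is a public endpoint, so you can also fetch the raw JSON directly. The sketch below only builds and prints the URL for the model from the example above, with the network fetch left as a commented `curl`:

```shell
# Build the Hub API URL for a single model's metadata:
model="google/vit-base-patch16-224"
url="https://huggingface.co/api/models/${model}"
echo "$url"
# Fetch with: curl -s "$url"   (JSON includes pipeline_tag, downloads, likes, ...)
```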

It is important to understand what a model can be used for and on which providers it is available.