@@ -69,7 +69,13 @@ Model inference with CLDK starts with a local LLM server. We'll use Ollama to ho
     ```

 === "macOS"
-    On macOS, Ollama runs automatically after installation. You can verify it's running by opening Activity Monitor and searching for "ollama".
+    On macOS, Ollama runs automatically after installation.
+
+    You can check the status with:
+    ```shell
+    launchctl list | grep "ollama"
+    ```
+

 ## Step 2: Pull the code LLM.

@@ -83,6 +89,16 @@ Model inference with CLDK starts with a local LLM server. We'll use Ollama to ho
 ollama run granite-code:8b-instruct 'Write a function to print hello world in python'
 ```

+You should see a response like:
+````shell
+❯ ollama run granite-code:8b-instruct 'Write a function to print hello world in python'
+```python
+def say_hello():
+    print("Hello World!")
+```
+````
+
+
 ## Step 3: Download Sample Codebase

 We'll use Apache Commons CLI as our example Java project:
@@ -106,7 +122,7 @@ export JAVA_APP_PATH=/path/to/commons-cli-1.7.0

 Let's build a pipeline that analyzes Java methods using LLMs. Create a new file `code_summarization.py`:

-```python title="code_summarization.py" linenums="1" hl_lines="7 10 12-17 21-22 24-25 34-37"
+```python title="code_summarization.py" linenums="1" hl_lines="7 10 12-17 24-25 27-28 39"
 import ollama
 from cldk import CLDK
 from pathlib import Path
@@ -124,6 +140,9 @@ for file_path, class_file in analysis.get_symbol_table().items():
     for type_name, type_declaration in class_file.type_declarations.items():
         # Iterate over methods
         for method in type_declaration.callable_declarations.values():  # (3)!
+            # Skip constructors
+            if method.is_constructor:
+                continue
             # Get code body
             code_body = Path(file_path).absolute().resolve().read_text()

@@ -143,7 +162,7 @@ for file_path, class_file in analysis.get_symbol_table().items():
             # Prompt Ollama
             summary = ollama.generate(
                 model="granite-code:8b-instruct",  # (6)!
-                prompt=instruction).get("response")  # (7)!
+                prompt=instruction).get("response")

             # Print output
             print(f"\nMethod: {method.declaration}")
@@ -156,8 +175,7 @@ for file_path, class_file in analysis.get_symbol_table().items():
 3. In a nested loop, we can quickly iterate over the methods in the project and extract the code body.
 4. CLDK comes with a number of treesitter-based utilities that can be used to extract and manipulate code snippets.
 5. We use the `sanitize_focal_class()` method to extract the focal class for the method and sanitize any unwanted code in just one line of code.
-6. Try your favorite model for code summarization. We use the `granite-code:8b-instruct` model in this example.
-7. We prompt Ollama with the sanitized class and method declaration to generate a summary for the method.
+6. We use the `granite-code:8b-instruct` model in this example. Try a different model from the [Ollama model library](https://ollama.com/library).
 ---

 ### Running `code_summarization.py`
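The control flow this commit leaves in `code_summarization.py` can be sketched offline as follows. This is a minimal sketch, not the script itself: `fake_generate` and the hard-coded `methods` list are stand-ins, since the real script calls `ollama.generate` against a running Ollama server and pulls methods from CLDK's symbol table.

```python
def fake_generate(model: str, prompt: str) -> dict:
    # Stand-in for ollama.generate; mimics the dict shape with a "response" key.
    return {"response": f"[{model}] summary of: {prompt}"}

# Stand-in for the methods CLDK would yield from the symbol table.
methods = [
    {"declaration": "Options()", "is_constructor": True},
    {"declaration": "addOption(Option opt)", "is_constructor": False},
]

summaries = []
for method in methods:
    # Skip constructors, as the diff adds.
    if method["is_constructor"]:
        continue
    instruction = f"Summarize this Java method: {method['declaration']}"
    summary = fake_generate(
        model="granite-code:8b-instruct",
        prompt=instruction).get("response")
    summaries.append((method["declaration"], summary))
    print(f"\nMethod: {method['declaration']}")
    print(summary)
```

The shape matters more than the stubs: one summary per non-constructor method, each produced by a single `generate`-style call on a per-method instruction.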