bug fixes for motia-content-creation #186
base: main
.gitignore:

```diff
@@ -106,4 +106,8 @@ temp/
 .Spotlight-V100
 .Trashes
 ehthumbs.db
 Thumbs.db
+
+# Typescript
+types.d.ts
+package-lock.json
```
motia-content-creation/package.json:

```diff
@@ -18,17 +18,20 @@
     "web-scraping"
   ],
   "dependencies": {
-    "motia": "^0.4.0-beta.90",
     "@mendable/firecrawl-js": "^1.0.0",
-    "openai": "^4.90.0",
     "axios": "^1.10.0",
     "dotenv": "^16.5.0",
-    "zod": "^3.25.67",
-    "axios": "^1.10.0"
+    "install": "^0.13.0",
+    "motia": "^0.4.0-beta.90",
+    "openai": "^4.90.0",
+    "or": "^0.2.0",
+    "pnpm": "^10.15.0",
+    "zod": "^3.25.67"
   },
```
Comment on lines +22 to 30

Contributor

💡 Verification agent · 🧩 Analysis chain

Prune likely-unused, supply-chain-risk dependencies from the runtime deps. Given the migration to Ollama in the Python steps, these runtime deps look unnecessary in the Node package and enlarge the attack surface. Move truly needed tools to devDependencies or remove them entirely.

Run this script to confirm actual usage before removal:

```bash
#!/bin/bash
# Verify usage of suspicious deps across the repo
set -euo pipefail
echo "Searching for imports/usages..."
rg -n -C2 -g '!**/node_modules/**' -g '!**/dist/**' -g '!**/build/**' \
  -e '\bfrom\s+["'\'']axios["'\'']' \
  -e '\brequire\(["'\'']axios["'\'']\)' \
  -e '\bfrom\s+["'\'']openai["'\'']' \
  -e '\brequire\(["'\'']openai["'\'']\)' \
  -e '\bfrom\s+["'\'']pnpm["'\'']' \
  -e '\brequire\(["'\'']pnpm["'\'']\)' \
  -e '\bfrom\s+["'\'']install["'\'']' \
  -e '\brequire\(["'\'']install["'\'']\)' \
  -e '\bfrom\s+["'\'']or["'\'']' \
  -e '\brequire\(["'\'']or["'\'']\)' \
  motia-content-creation
```

If unused, apply this diff:

```diff
 "dependencies": {
   "@mendable/firecrawl-js": "^1.0.0",
-  "axios": "^1.10.0",
   "dotenv": "^16.5.0",
-  "install": "^0.13.0",
   "motia": "^0.4.0-beta.90",
-  "openai": "^4.90.0",
-  "or": "^0.2.0",
-  "pnpm": "^10.15.0",
   "zod": "^3.25.67"
 },
```

🏁 Script executed (length of output: 1385).

The usage scan confirms: remove the unused dependencies and retain only the truly required runtime packages. Apply this diff to motia-content-creation/package.json:

```diff
 "dependencies": {
   "@mendable/firecrawl-js": "^1.0.0",
   "axios": "^1.10.0",
   "dotenv": "^16.5.0",
   "motia": "^0.4.0-beta.90",
-  "openai": "^4.90.0",
-  "install": "^0.13.0",
-  "or": "^0.2.0",
-  "pnpm": "^10.15.0",
   "zod": "^3.25.67"
 },
```

This pruning reduces supply-chain risk without impacting any runtime code.
```diff
   "devDependencies": {
     "@types/node": "^20.17.28",
     "@types/react": "^18.3.23",
     "ts-node": "^10.9.2",
     "typescript": "^5.8.3"
   }
 }
```
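The review's usage check can also be sketched in plain Python for a quick local pass (a minimal sketch under assumptions: `unused_deps` is a hypothetical helper, and the regex mirrors only the ESM/CJS import forms the ripgrep patterns above look for):

```python
import json
import re

def unused_deps(pkg_json: str, source_files: dict) -> list:
    """Flag dependencies never referenced by an import/require statement."""
    deps = json.loads(pkg_json).get("dependencies", {})
    blob = "\n".join(source_files.values())
    unused = []
    for name in deps:
        # Match `from "<name>"` (ESM) or `require("<name>")` (CJS)
        pattern = (r'from\s+["\']' + re.escape(name) + r'["\']'
                   r'|require\(["\']' + re.escape(name) + r'["\']\)')
        if not re.search(pattern, blob):
            unused.append(name)
    return sorted(unused)

pkg = '{"dependencies": {"axios": "^1.10.0", "pnpm": "^10.15.0", "or": "^0.2.0"}}'
src = {"index.ts": 'import axios from "axios";'}
print(unused_deps(pkg, src))  # → ['or', 'pnpm']
```

A real pass should scan files on disk (as the ripgrep script does); this helper only shows the matching logic.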
LinkedIn generation step (imports and client setup):

```diff
@@ -1,15 +1,14 @@
 import os
 import json
+import ollama
 import asyncio
 from pydantic import BaseModel, HttpUrl
 from datetime import datetime
 from dotenv import load_dotenv
-from openai import AsyncOpenAI

 load_dotenv()

-OPENAI_API_KEY = os.getenv('OPENAI_API_KEY')
-
-openai_client = AsyncOpenAI(api_key=OPENAI_API_KEY)
+OLLAMA_MODEL = os.getenv('OLLAMA_MODEL', 'deepseek-r1')
```
Collaborator

Changed the Ollama model to the more memory-efficient Qwen3 instead of DeepSeek-R1.

Suggested change
```diff
 class GenerateInput(BaseModel):
     requestId: str
```
```diff
@@ -35,21 +34,24 @@ async def handler(input, context):

     linkedinPrompt = linkedinPromptTemplate.replace('{{title}}', input['title']).replace('{{content}}', input['content'])

-    context.logger.info("🔄 LinkedIn content generation started...")
-
-    linkedin_content = await openai_client.chat.completions.create(
-        model="gpt-4o",
-        messages=[{'role': 'user', 'content': linkedinPrompt}],
-        temperature=0.7,
-        max_tokens=2000,
-        response_format={'type': 'json_object'}
+    context.logger.info(f"🔄 LinkedIn content generation started using Ollama model: {OLLAMA_MODEL}...")
+    response = ollama.chat(
+        model=OLLAMA_MODEL,
+        messages=[{'role': 'user', 'content': linkedinPrompt}],
+        options={
+            'temperature': 0.7,
+            'num_predict': 2000
+        }
     )

+    response_content = response['message']['content']
+    context.logger.info(f"Received raw response from Ollama: {response_content[:100]}...")
+
     try:
-        linkedin_content = json.loads(linkedin_content.choices[0].message.content)
+        linkedin_content = json.loads(response['message']['content'])
     except Exception:
-        linkedin_content = {'text': linkedin_content.choices[0].message.content}
+        linkedin_content = {'text': response['message']['content']}

     context.logger.info(f"🎉 LinkedIn content generated successfully!")

     await context.emit({
```
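The parse-or-wrap fallback in this hunk can be isolated as a small helper (a sketch; `parse_model_response` is a hypothetical name, and the `isinstance` guard is an extra precaution not present in the diff):

```python
import json

def parse_model_response(raw: str) -> dict:
    """Parse a model reply as JSON; wrap plain prose under a 'text' key."""
    try:
        parsed = json.loads(raw)
        if isinstance(parsed, dict):
            return parsed
    except (json.JSONDecodeError, TypeError):
        pass
    # Without OpenAI's response_format, the reply may be prose rather than JSON
    return {"text": raw}

print(parse_model_response('{"post": "Hello LinkedIn"}'))  # → {'post': 'Hello LinkedIn'}
print(parse_model_response("just prose"))                  # → {'text': 'just prose'}
```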
Twitter generation step (imports and client setup):

```diff
@@ -3,13 +3,11 @@
 from pydantic import BaseModel, HttpUrl
 from datetime import datetime
 from dotenv import load_dotenv
-from openai import AsyncOpenAI
+import ollama

 load_dotenv()

-OPENAI_API_KEY = os.getenv('OPENAI_API_KEY')
-
-openai_client = AsyncOpenAI(api_key=OPENAI_API_KEY)
+OLLAMA_MODEL = os.getenv('OLLAMA_MODEL', 'deepseek-r1')
```
Collaborator

Change the default Ollama model to Qwen3 instead of DeepSeek-R1.

Suggested change
|
|
||||||
| class GenerateInput(BaseModel): | ||||||
| requestId: str | ||||||
|
|
```diff
@@ -37,18 +35,19 @@ async def handler(input, context):

     context.logger.info("🔄 Twitter content generation started...")

-    twitter_content = await openai_client.chat.completions.create(
-        model="gpt-4o",
-        messages=[{'role': 'user', 'content': twitterPrompt}],
-        temperature=0.7,
-        max_tokens=2000,
-        response_format={'type': 'json_object'}
-    )
+    twitter_content = ollama.chat(
+        model=OLLAMA_MODEL,
+        messages=[{'role': 'user', 'content': twitterPrompt}],
+        options={
+            'temperature': 0.7,
+            'num_predict': 2000
+        }
+    )

     try:
-        twitter_content = json.loads(twitter_content.choices[0].message.content)
+        twitter_content = json.loads(twitter_content['message']['content'])
     except Exception:
-        twitter_content = {'text': twitter_content.choices[0].message.content}
+        twitter_content = {'text': twitter_content['message']['content']}

     context.logger.info(f"🎉 Twitter content generated successfully!")
```
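Both migrations map the removed OpenAI kwargs onto Ollama's `options` dict the same way, which can be sketched as a tiny translation helper (`to_ollama_options` is hypothetical; the name pairs come from the removed and added lines above):

```python
def to_ollama_options(openai_kwargs: dict) -> dict:
    """Translate removed OpenAI kwargs into Ollama chat options.

    temperature passes through; max_tokens becomes num_predict.
    response_format has no counterpart in the options dict here.
    """
    mapping = {"temperature": "temperature", "max_tokens": "num_predict"}
    return {mapping[k]: v for k, v in openai_kwargs.items() if k in mapping}

print(to_ollama_options({"temperature": 0.7, "max_tokens": 2000}))
# → {'temperature': 0.7, 'num_predict': 2000}
```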
Article scraping step (Firecrawl client migration):

```diff
@@ -1,6 +1,6 @@
 import os
 from pydantic import BaseModel, HttpUrl
-from firecrawl import FirecrawlApp
+from firecrawl import Firecrawl
 from datetime import datetime
 from dotenv import load_dotenv
```

```diff
@@ -26,15 +26,15 @@ class ScrapeInput(BaseModel):
 async def handler(input, context):
     context.logger.info(f"🕷️ Scraping article: {input['url']}")

-    app = FirecrawlApp(api_key=FIRECRAWL_API_KEY)
+    firecrawl = Firecrawl(api_key=FIRECRAWL_API_KEY)

-    scrapeResult = app.scrape_url(input['url'])
+    scrapeResult = firecrawl.scrape(input['url'], formats=["markdown"])
```
Comment on lines +29 to +31

Contributor

🛠️ Refactor suggestion

Guard against a missing FIRECRAWL_API_KEY and avoid blocking the event loop; also prefer a neutral client name.

Apply:

```diff
-    firecrawl = Firecrawl(api_key=FIRECRAWL_API_KEY)
-
-    scrapeResult = firecrawl.scrape(input['url'], formats=["markdown"])
+    if not FIRECRAWL_API_KEY:
+        raise RuntimeError("FIRECRAWL_API_KEY is not set. Configure it in your environment.")
+
+    client = Firecrawl(api_key=FIRECRAWL_API_KEY)
+
+    # Offload sync HTTP call to a worker thread to avoid blocking the event loop
+    scrapeResult = await asyncio.to_thread(client.scrape, str(input['url']), formats=["markdown"])
```

Add outside this range: `import asyncio` at the top with the other imports.
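The `asyncio.to_thread` pattern suggested here can be demonstrated with a stand-in blocking function (a sketch; `blocking_scrape` is a placeholder for the synchronous Firecrawl call, not the real SDK):

```python
import asyncio
import time

def blocking_scrape(url: str) -> str:
    """Stand-in for the synchronous Firecrawl HTTP call."""
    time.sleep(0.05)  # simulate network latency
    return f"# markdown for {url}"

async def handler(url: str) -> str:
    # Offload the blocking call so the event loop keeps servicing other tasks
    return await asyncio.to_thread(blocking_scrape, url)

result = asyncio.run(handler("https://example.com"))
print(result)  # → # markdown for https://example.com
```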
```diff
-    if not scrapeResult.success:
-        raise Exception(f"Firecrawl scraping failed: {scrapeResult.error}")
+    if not hasattr(scrapeResult, 'markdown'):
+        raise Exception(f"Firecrawl scraping failed: No content returned")

-    content = scrapeResult.markdown
-    title = scrapeResult.metadata.get('title', 'Untitled Article')
+    content = scrapeResult.markdown or ''
+    title = getattr(scrapeResult.metadata, 'title', 'Untitled Article') if hasattr(scrapeResult, 'metadata') else 'Untitled Article'

     context.logger.info(f"✅ Successfully scraped: {title} ({len(content) if content else 0} characters)")
```
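The defensive attribute access above can be factored into one helper and exercised with stand-in objects (a sketch; `SimpleNamespace` stands in for the SDK's response type):

```python
from types import SimpleNamespace

def extract_article(result):
    """Pull (title, content) from a scrape result without assuming its shape."""
    content = getattr(result, "markdown", "") or ""
    meta = getattr(result, "metadata", None)
    title = getattr(meta, "title", "Untitled Article") if meta else "Untitled Article"
    return title, content

ok = SimpleNamespace(markdown="# Body", metadata=SimpleNamespace(title="Hello"))
bare = SimpleNamespace(markdown=None)
print(extract_article(ok))    # → ('Hello', '# Body')
print(extract_article(bare))  # → ('Untitled Article', '')
```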