# Translation Fails or Hangs
## Common Error Responses
When using cloud-based LLM providers (Anthropic, OpenAI, DeepSeek), you may encounter these standard HTTP error responses:
> [!TIP]
> Check the Output Log with `MinSeverity` set to `Debug` to see detailed error responses.
| Code | Type | Description | Resolution |
|------|------|-------------|------------|
| 400 | `invalid_request_error` | Issue with request format/content | Check plugin settings and reduce complexity if needed |
| 401 | `authentication_error` | Invalid or missing API key | Verify API key in Project Settings |
| 403 | `permission_error` | Insufficient permissions | Check API key permissions and usage tier |
| 404 | `not_found_error` | Resource not found | Verify endpoint URLs and model availability |
| 413 | `request_too_large` | Request exceeds size limits | Reduce reference files or translation depth |
| 429 | `rate_limit_error` | Rate limit exceeded | Wait before retrying or check usage quotas |
| 500 | `api_error` | Provider internal error | Wait and retry; check provider status |
| 529 | `overloaded_error` | Service temporarily overloaded | Wait and retry later |
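If you hit a 401 or 429, it can help to confirm the key and quota outside the editor. A minimal sketch using curl against OpenAI's model-listing endpoint (the URL and auth header vary by provider; for Anthropic, use an `x-api-key` header plus an `anthropic-version` header instead):

```bash
# Prints 200 if the API key is valid, 401 if it is missing or invalid
curl -s -o /dev/null -w "%{http_code}\n" \
  https://api.openai.com/v1/models \
  -H "Authorization: Bearer $OPENAI_API_KEY"
```

If this returns 200 but the plugin still fails, the problem is more likely request size, model selection, or plugin configuration than the key itself.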
## Translation Timeout Issues
If your translation appears to hang indefinitely:
> [!NOTE]
> Node to Code's default timeout is 1000 seconds, but translations typically complete within 30-60 seconds, depending on graph complexity and the LLM provider and model.
Check for:
- Network connectivity issues
- LLM provider service outages
- Exceeded token limits
- Oversized requests
Resolution Steps:
- Check the Output Log for timeout messages
- Verify internet connection
- Confirm LLM provider status (a quick probe is sketched below)
- Reduce translation complexity if needed:
  - Lower translation depth
  - Remove unnecessary reference files
  - Split large graphs into smaller components
  - Use a model with a larger context window
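For the connectivity and provider-status checks above, a capped request can distinguish a dead connection from a slow provider instead of waiting out the plugin's 1000-second timeout. A sketch against Anthropic's messages endpoint; the model name is illustrative, so swap in your own provider's URL, headers, and model:

```bash
# Give up after 15 seconds; curl exits with code 28 on timeout
curl --max-time 15 -s -o /dev/null -w "HTTP %{http_code} in %{time_total}s\n" \
  https://api.anthropic.com/v1/messages \
  -H "x-api-key: $ANTHROPIC_API_KEY" \
  -H "anthropic-version: 2023-06-01" \
  -H "content-type: application/json" \
  -d '{"model":"claude-3-5-haiku-latest","max_tokens":16,"messages":[{"role":"user","content":"ping"}]}'
```

A fast HTTP 200 here points the investigation back at request size or translation depth rather than the network or the provider.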
## Ollama-Specific Issues
When using local LLM processing through Ollama, unique issues may arise:
- **Service Connection**

  ```bash
  # Verify Ollama is running
  curl http://localhost:11434/api/version
  ```

  - Check endpoint configuration in Plugin Settings
  - Ensure Ollama service is running
  - Verify network access if running on remote machine
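  If the editor runs on a different machine than Ollama, point the same version check at that host; on Linux service installs you can also query systemd. The host address below is illustrative:

  ```bash
  # Must match the endpoint configured in Plugin Settings (hypothetical host)
  curl http://192.168.1.50:11434/api/version

  # Linux installs register a systemd service named "ollama"
  systemctl status ollama
  ```

  Note that Ollama binds to 127.0.0.1 by default; remote access requires setting `OLLAMA_HOST` on the serving machine.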
- **Model Management**

  ```bash
  # List installed models
  ollama list

  # Pull required model if missing
  ollama pull codellama:code
  ```

  - Confirm model is installed
  - Check model name matches Plugin Settings
  - Verify model compatibility
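  Since the plugin talks to Ollama over its HTTP API, you can also confirm installed model tags the same way; the model name in Plugin Settings must match a tag returned here:

  ```bash
  # Lists installed models as JSON (the HTTP equivalent of `ollama list`)
  curl http://localhost:11434/api/tags
  ```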
- **Resource Constraints**
  - Monitor system resources:
    - System RAM usage
    - CPU utilization
    - GPU availability and VRAM usage (if using GPU acceleration)
    - Disk space for model storage
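  On Linux, a few one-liners cover most of these checks (the model directory shown is the default user-level path and is an assumption; service installs may store models elsewhere, or wherever `OLLAMA_MODELS` points):

  ```bash
  free -h                   # system RAM usage
  nvidia-smi                # GPU availability and VRAM usage (NVIDIA GPUs)
  du -sh ~/.ollama/models   # disk space used by model storage (default user path)
  ```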
- **Common Ollama Errors**

  You may see errors pertaining to the following:
  - Failed to connect to Ollama service
  - Model not found
  - Out of memory
  - GPU not available

  Resolution Steps:
  - Restart Ollama service
  - Free up system resources
  - Check Ollama logs for detailed error messages
  - Refer to Ollama's documentation and/or community for further troubleshooting
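  On a systemd-based Linux install, restarting the service and tailing its logs looks like this; macOS users can quit and relaunch the menu-bar app and check `~/.ollama/logs` instead:

  ```bash
  # Restart the service, then follow its logs for the detailed error
  sudo systemctl restart ollama
  journalctl -u ollama -f
  ```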
## General Troubleshooting Steps
- **Check Logs**
  - Review Output Log for specific error messages
  - Check HTTP response codes
  - Monitor token usage warnings
  - Look for timeout indicators

> [!NOTE]
> Set `MinSeverity` to `Debug` in Plugin Settings for maximum logging detail during troubleshooting.
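Everything in the Output Log is also written to the project's saved log file, which is easier to search and to attach to a support request. A sketch assuming the default UE log location; the project name is a placeholder and the search string is an assumption about the plugin's log output:

```bash
# Default editor log path is <Project>/Saved/Logs/<ProjectName>.log
grep -i "nodetocode" Saved/Logs/MyProject.log
```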
- **Validate Configuration**
  - Confirm plugin settings
  - Test API connectivity
  - Verify file paths for reference files
  - Check model selection and availability
- **Reduce Complexity**
  - Lower translation depth
  - Remove reference files
  - Simplify Blueprint graph
  - Split into smaller translations

> [!TIP]
> Start with a simple graph translation to establish baseline functionality before attempting more complex translations.
## Prevention Tips
- **Regular Maintenance**
  - Keep API keys updated
  - Monitor token usage and costs
  - Update plugin regularly
  - Clean up reference files
- **Best Practices**
  - Start with simple translations
  - Gradually increase complexity
  - Document successful configurations
  - Maintain backup API keys

> [!NOTE]
> Keep reference files under 50-100 KB total (depending on LLM context window) for optimal performance and reduced token usage.
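To confirm you are under that budget, total the reference files on disk before attaching them in Plugin Settings (the file paths here are illustrative):

```bash
# Prints a per-file byte count plus a total on the last line
wc -c Source/MyProject/Public/MyActor.h Source/MyProject/Public/MyComponent.h
```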
## When to Seek Support
If issues persist after trying the above solutions, prepare the following information before submitting an issue in the Node to Code Discord:
- Complete error messages from Output Log
- Plugin version
- Unreal Engine version
- LLM provider and model
- Sample Blueprint causing issues (if possible)
- Steps to reproduce the problem
- Any relevant plugin settings
Remember that most translation issues can be resolved by:
- Checking error responses in the Output Log
- Validating configuration and credentials
- Ensuring proper resource availability
- Starting with simpler translations to isolate issues (see Blueprint Selection for guidance)