Open Source MindManager AI Integration, Win + Mac
Looking for AI integration? Check out my open source project: MindManager AI.
This approach uses Python and works on both Windows and macOS, supporting all major LLMs. It is not only an AI integration but also a way to automate MindManager without having to write VBA or AppleScript directly.
For the AI integration itself, you need either an API key or a local LLM.
Note: This is not an out-of-the-box solution. Some basic understanding of running Python scripts is required.
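To give an idea of what an API-key-based call looks like, here is a minimal Python sketch using the official openai package; the environment variable name, model, and prompt are placeholders, and the project's actual entry point may differ:

```python
import os
from openai import OpenAI

# Placeholder illustration: the key is read from an environment variable
# (the variable name and model are assumptions, not the project's exact code).
client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Refine this mind map topic: 'Project kickoff'"}],
)
print(response.choices[0].message.content)
```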
If you encounter any difficulties, feel free to reach out. I’m happy to improve the documentation based on your feedback.
Current use cases (using LLMs, e.g. GPT-4o):
- Refinement of the map or topic.
- Refinement of the map or topic from a development perspective.
- Creating examples for one, several (selected), or all topics.
- Clustering topics from scratch.
- Clustering by one or more criteria, e.g. Organization/Process/Project/Expertise or a Capex/Opex perspective (see the sketch after this list).
- Complex cases (multiple calls), e.g. refinement + clustering + examples.
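To illustrate what such an action does under the hood, here is a hedged sketch of how a clustering prompt could be assembled from topic texts; the topics, criteria, and wording are made up, and the real prompt templates in the repository may look different:

```python
# Hypothetical illustration of building a clustering prompt from map topics.
topics = ["Budget approval", "Hiring plan", "Server migration", "Team training"]
criteria = "Organization/Process/Project/Expertise"

prompt = (
    f"Cluster the following mind map topics by these criteria: {criteria}.\n"
    "Return the result as an indented list, one cluster per top-level item.\n\n"
    + "\n".join(f"- {t}" for t in topics)
)
print(prompt)  # this text would be sent to the LLM of your choice
```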
Tested LLMs:
- Azure OpenAI w/ GPT-4o (use your key) -> best tested
- OpenAI w/ GPT-4o (use your key) -> best results
- Anthropic w/ Claude 3 (use your key)
- Groq (platform) w/ Llama 3 (use your key)
- Perplexity (platform) w/ Llama 3 (use your key)
- Google Gemini w/ Pro and Flash (use your key)
- Google Vertex AI w/ Gemini Pro and Gemini Flash (use your access token)
- Ollama (local) w/ any LLM (use a Llama 3, Zephyr or Mixtral model for best results; a minimal call example follows below)
- MLX (local w/ Apple Silicon) w/ any LLM (use a Llama 3 model for best results)
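For the local Ollama option, a request boils down to a plain HTTP call against the local server. A minimal sketch, assuming a default Ollama installation with the llama3 model pulled (this is not the project's actual code):

```python
import requests

# Ollama serves an HTTP API on port 11434 by default.
resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "llama3",
        "prompt": "Suggest three subtopics for the mind map topic 'Cloud migration'.",
        "stream": False,  # return the full answer in one JSON object
    },
    timeout=120,
)
print(resp.json()["response"])
```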
Hello MindManager Community,
This Idea has been thoroughly vetted and approved before it was published.
Best regards,
-Marian
I extended the solution today with image generation using DALL-E 3 (Azure / OpenAI) or Stable Diffusion 3 (Stability AI).
The prompt is built from the selected topics or the central topic. Image generation takes some time, though.
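Roughly, the prompt assembly and a DALL-E 3 call look like this; a sketch using the official openai package, with placeholder topics and parameters that may differ from the project's actual code:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Placeholder topics standing in for the selected map topics.
selected_topics = ["Renewable energy", "Grid storage", "Smart metering"]
prompt = "A clean, schematic illustration combining: " + ", ".join(selected_topics)

result = client.images.generate(model="dall-e-3", prompt=prompt, size="1024x1024", n=1)
print(result.data[0].url)  # URL of the generated image
```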
I'm always open to your suggestions or contributions. As you know, developer library support is limited on both platforms, more so on macOS than on Windows.
I appreciate your effort; however, some people in the Anki community use the web version, i.e. ChatGPT, instead of the API.
The USD 20 subscription includes an effectively unlimited number of prompts (30 per hour?) with a maximum of 8-16k tokens (I'm not sure, since I seldom use that much text).
The API, however, is pay-per-use; it hurts my purse while the text is still displaying on the screen.
Yes, you support local open-source LLMs, but obviously GPT-4 and quite a few closed-source LLMs are the most capable ones nowadays.
So please see whether you could adapt the technique to use the all-you-can-eat web version instead of the API.
As far as I remember, they capture the login cookie/token so they can use the web version.
Thank you.
Added an example action to generate a glossary of all special terms on a map.
Currently, the output (Markdown) must not exceed 4,000 tokens (tokens, not characters or words), because the result is generated by a single LLM call. After the call, the Markdown is converted to HTML.
For this animation, the new Anthropic Claude 3.5 Sonnet model was used. It currently seems to be best in class.
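As an illustration of the token limit and the Markdown-to-HTML step, a minimal sketch assuming the tiktoken and markdown packages (the project's actual token accounting and conversion may differ):

```python
import markdown   # pip install markdown
import tiktoken   # pip install tiktoken

md_text = "# Glossary\n\n**LLM**: Large Language Model\n"  # placeholder glossary output

# Count tokens of the generated Markdown (must stay below the 4,000-token limit).
enc = tiktoken.get_encoding("cl100k_base")  # encoding used by GPT-4-class models
print("tokens:", len(enc.encode(md_text)))

# Convert the Markdown result to HTML after the LLM call.
print(markdown.markdown(md_text))
```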
---