Open Source MindManager AI Integration, Win + Mac

RZa shared this idea 5 months ago
Voting Open

Looking for AI integration? Check out my open source project: MindManager AI.

This approach uses Python and works on both Windows and macOS, supporting all major LLMs. It's not only an AI integration but also a way to automate MindManager without writing ugly VBA or AppleScript directly.
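On Windows, for example, such automation typically goes through MindManager's COM interface via pywin32. A minimal, hypothetical sketch; the ProgID and property names are assumptions (they vary by MindManager version) and are not taken from the project's code:

```python
# Hypothetical Windows sketch: drive MindManager via COM with pywin32.
# ProgID and property names are assumptions, not verified against the app.
import win32com.client

mm = win32com.client.Dispatch("MindManager.Application")  # assumed ProgID
doc = mm.ActiveDocument            # assumed: the currently open map
print(doc.CentralTopic.Text)       # assumed: text of the central topic
```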

For the AI integration itself you need either an API key or a local LLM (see the sketch after the list of tested LLMs below).

Note: This is not an out-of-the-box solution. Some basic understanding of starting Python scripts is required.

If you encounter any difficulties, feel free to reach out. I’m happy to improve the documentation based on your feedback.

Current use cases (using LLMs, e.g. GPT-4o):

  • Refinement of the map or a topic.
  • Refinement of the map or a topic from a development perspective.
  • Creating examples for one, several (selected), or all topics.
  • Clustering topics from scratch.
  • Clustering by one or more criteria, e.g. Organization/Process/Project/Expertise or a Capex/Opex perspective.
  • Complex cases (multiple calls), e.g. refinement + clustering + examples.

Tested LLMs:

  • Azure OpenAI w/ GPT-4o (use your key) -> most thoroughly tested
  • OpenAI w/ GPT-4o (use your key) -> best results
  • Anthropic w/ Claude 3 (use your key)
  • Groq (platform) w/ Llama 3 (use your key)
  • Perplexity (platform) w/ Llama 3 (use your key)
  • Google Gemini w/ Pro and Flash (use your key)
  • Google Vertex AI w/ Gemini Pro and Gemini Flash (use your access token)
  • Ollama (local) w/ any LLM (use the Llama 3, Zephyr, or Mixtral models for best results)
  • MLX (local, Apple Silicon) w/ any LLM (use the Llama 3 model for best results)
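To illustrate the two access modes mentioned above, here is a minimal sketch of one hosted call (OpenAI, API key) and one local call (Ollama). The model names are just examples; this is not the project's actual adapter code:

```python
# Hosted vs. local LLM access; a sketch, not the project's adapter code.
import requests
from openai import OpenAI

# Hosted: needs OPENAI_API_KEY set in the environment.
client = OpenAI()
chat = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Cluster these topics: A, B, C"}],
)
print(chat.choices[0].message.content)

# Local: Ollama listening on its default port, no key required.
resp = requests.post(
    "http://localhost:11434/api/generate",
    json={"model": "llama3", "prompt": "Cluster these topics: A, B, C",
          "stream": False},
)
print(resp.json()["response"])
```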

Best Answer

Hello MindManager Community,

This Idea was thoroughly vetted and approved before it was published.

Best regards,

-Marian

Replies (4)


I extended the solution today with image generation by DALL-E 3 (Azure / OpenAI) or Stable Diffusion 3 (Stability AI).

The prompt is built from the selected topics or the central topic. Image generation takes some time, though.
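For illustration, a minimal sketch of a DALL-E 3 call via the OpenAI Python SDK; the prompt assembly from topics is simplified and hypothetical, not the project's code:

```python
# Sketch: generate an image from topic texts with the OpenAI SDK.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
topics = ["Cloud migration", "Security", "Cost control"]  # e.g. selected topics
prompt = "A clean illustration combining: " + ", ".join(topics)

result = client.images.generate(model="dall-e-3", prompt=prompt,
                                size="1024x1024", n=1)
print(result.data[0].url)  # URL of the generated image
```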



I'm always open to your suggestions or contributions. As you know, developer library support is limited on both platforms, more so on macOS than on Windows.


btw, AFAIR MindManager for Mac doesn't support WinWrap macros, does it?

How did you write this?

Thank you.

PS: I'm poor and will never have macOS, and I'm against their closed-source philosophy (even Microsoft is more open source than they are; SJ is indeed a very, very SF person).


If you go to my GitHub repo, you can read in the documentation that I'm using the appscript Python library, which wraps AppleScript (the worst language I've ever seen). Python scripts are started (including parameters) with Apple Automator, which integrates into the MindManager menu bar.
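For a rough idea of what the appscript route looks like, here is a hypothetical macOS sketch; the MindManager dictionary terms (documents, central_topic, name) are assumptions, not verified against the app:

```python
# Hypothetical macOS sketch with py-appscript, which wraps AppleScript.
from appscript import app

mindmanager = app("MindManager")
doc = mindmanager.documents[1]        # frontmost open map (assumed term)
central = doc.central_topic.get()     # assumed dictionary term
print(central.name.get())             # assumed: title of the central topic
```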


I appreciate your effort; however, some people in the Anki community use the web version, i.e. ChatGPT, instead of the API.

The USD 20 subscription includes an all-you-can-eat amount of prompts (30/hr?) with a max token count of 8-16k? (I'm not sure, because I seldom use that much text.)

The API, however, is pay-per-use; it hurts my purse while the text is displaying on the screen.

Yes, you support local open-source LLMs, but obviously GPT-4 and quite a few closed-source LLMs are the most capable ones nowadays.

So please see if you could adapt the technique to use the all-you-can-eat web version instead of the API.

AFAIR, they capture the login cookie/token so they can use the web version.

Thank you.


Thank you for your comment.

Unfortunately, grabbing the token in this manner is against the AI vendors' rules, and it isn't fair either, from my point of view.

Using chat APIs is indeed very cheap. If you use an API within your company, you can save even more costs by sharing the API with your colleagues.

Have a nice day.



Added an example action to generate a glossary of all special terms on a map.

For now, the output (Markdown) must not exceed 4,000 tokens (tokens, not characters or words), as the result is generated by a single LLM call. After the call, the small Markdown code is converted to HTML.
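The Markdown-to-HTML step can be done with the common `markdown` package; a sketch under that assumption (the project may use a different converter):

```python
# Convert LLM-generated Markdown to HTML with the `markdown` package.
import markdown

md_text = "# Glossary\n\n**Capex**: capital expenditure.\n"
html = markdown.markdown(md_text)
print(html)  # e.g. '<h1>Glossary</h1>\n<p><strong>Capex</strong>: ...</p>'
```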

For this animation, the new Anthropic Claude 3.5 Sonnet model was used. This model currently seems to be best in class.


[animation]


I use Claude 3 Opus; Opus is the paid version of Sonnet... obviously Sonnet is the free one. So what do you mean by "it's the best now"?


thank you.


BTW, I have done that easily before using Python and WinWrap macros.

The only problem is the paywall.

Each press costing you money is very different from a fixed USD 20/mo.

Maybe you and I live in comfort, but imagine someone in the third world with no water, no food, no electricity; they walk kilometers to fetch a bucket of water, then pull a rotor for an hour to get electricity. Maybe they have to hunt wild animals too.

And I'm already excluding those who are suffering in WARS.

So I still ask, FOR THEM and myself, whether it's possible to use the all-you-can-eat plan instead.

Thank you.


Anthropic released the new Claude 3.5 Sonnet model yesterday (2024-06-20) on their developer platform.

Anthropic points out that this model can outperform GPT-4o and others despite being cheaper. Previously, the Claude 3 family consisted of the Opus, Sonnet, and Haiku models, with Haiku being the cheapest.

I understand your concern about the costs, but my main interest is in evaluating each AI platform and connecting it to MindManager easily.

If you want free or cheap AI resources, you can look at the Hugging Face ecosystem or Google Colab solutions.


If you want to save costs on the AI platform, you can use my solution but intercept the API call. Just grab the prompt (from the log folder) and feed it into your LLM subscription's chat window. With small changes in the code you can later feed the result back into the solution and generate the new map or glossary.
For maps it's simply a Mermaid-format import, and for the glossary it's a Markdown import.
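A sketch of that manual round-trip; the file locations and names are illustrative, not the project's actual layout:

```python
# Manual round-trip: logged prompt -> chat window -> Mermaid reply -> import.
from pathlib import Path

prompt = Path("log/last_prompt.txt").read_text(encoding="utf-8")
print(prompt)  # 1) paste this into your subscription's chat window

# 2) save the model's Mermaid reply to a file by hand, then read it back:
mermaid = Path("reply.mmd").read_text(encoding="utf-8")
# 3) hand `mermaid` to the solution's Mermaid import step
```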


Another problem in my Python approach, and it seems in your script too, is that you apply a pre-existing prompt to the topic text.

In real life, quite often the first reply from the AI is not good; I have to ask again or follow up to obtain the result I want.

This is also the reason I didn't subscribe to some "PDF to Anki" AI service that turns a PDF into flashcards: I think I need to fine-tune, and the fixed USD 20 plans are a better fit. You shoot many times, but not every shot will be on target.


My engineered prompts currently follow either a zero-shot or a one-shot strategy, so only one LLM request is needed for the whole result (except for "complex" actions like "refine,refine,cluster").
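To make the one-shot idea concrete, here is an illustrative prompt builder: a single worked example is embedded so one LLM call yields the complete result. The wording is hypothetical, not the project's engineered prompt:

```python
# Illustrative one-shot prompt: one worked example embedded in the request.
EXAMPLE_IN = "Project\n  Budget\n  Team"
EXAMPLE_OUT = "Project\n  Budget\n    Capex\n    Opex\n  Team\n    Roles"

def build_refine_prompt(outline: str) -> str:
    return (
        "Refine this mind map outline by adding one level of detail. "
        "Reply with the refined outline only.\n\n"
        f"Example input:\n{EXAMPLE_IN}\n\nExample output:\n{EXAMPLE_OUT}\n\n"
        f"Input:\n{outline}\n\nOutput:"
    )
```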

My future goal is the implementation of agentic actions using multi-shot strategies to overcome the 4,000-token limit per call.

LLM costs depend on the called actions and the "max_token" config parameter.

If you want to know details about the implementation, you can just look at my code or ask me. Discussing the costs of LLMs is not on my agenda. With regard to privacy, I'm using Ollama and local models.

---