Running and comparing locally Deepseek, Qwen, Phi

Running LLMs locally has a bunch of benefits: it's faster, more private, gives you more freedom, and if your hardware allows it, you can run much, much bigger queries. I found it quite easy through LM Studio, which let me test and compare several open-source models, including DeepSeek, Qwen, and Phi.

I ran a quick comparison and here are my thoughts! You can also find a small installation guide in the second half of the article. Caveat: this whole experiment is very subjective, but overall it met my initial goal, which was to run an LLM locally with reasonable quality and accuracy. I tested the models that LM Studio recommended: DeepSeek, Qwen, and Phi.

My thoughts on DeepSeek, Qwen, and Phi

Accuracy: The Edge of Phi and Qwen

When evaluating the performance of these models, one aspect that consistently stood out was accuracy. Phi appears to deliver more precise outputs than its counterparts, while Qwen's responses seemed better aligned with the context and nuances of the input questions. These observations are subjective and may vary depending on individual use cases and expectations.

Speed: A Close Contest

In terms of response speed, DeepSeek and Qwen feel slightly faster than Phi, although it's hard to assess this definitively on my hardware setup. The difference is minimal and could be influenced by factors such as background system load or differences in model size and quantization rather than the models themselves.
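Rather than going by feel, speed can be quantified as tokens per second. Here is a minimal sketch of such a measurement; the `generate` callable is a stand-in for a real local model call and is stubbed here purely for illustration:

```python
import time

def tokens_per_second(generate, prompt):
    """Time a generation call and return (tokens, tokens/sec)."""
    start = time.perf_counter()
    tokens = generate(prompt)  # expected to return a list of tokens
    elapsed = time.perf_counter() - start
    return tokens, len(tokens) / elapsed if elapsed > 0 else float("inf")

# Stub standing in for a real local model call (hypothetical).
def fake_generate(prompt):
    time.sleep(0.01)          # simulate some inference latency
    return prompt.split()     # pretend each word is a token

out, tps = tokens_per_second(fake_generate, "how fast is this model really")
```

Swapping `fake_generate` for a call into your local runtime would give a rough apples-to-apples number across models, as long as the prompt and settings are kept identical.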

Handling Files: A Common Challenge

One particular task I attempted with all three models was interpreting a PDF file—my credit card statement—to determine the number of transactions. Unfortunately, each model struggled to accurately read and analyze the attached file. This limitation highlights a common challenge that LLMs face when dealing with non-textual data or complex document formats without preprocessing.
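A common workaround is to do the extraction outside the model (for example with a PDF-to-text library) and then split the text into pieces that fit the model's context window. A minimal sketch of the chunking step, assuming the statement text has already been extracted to a plain string:

```python
def chunk_text(text, max_chars=2000):
    """Split text into chunks of at most max_chars, breaking on line boundaries."""
    chunks, current = [], ""
    for line in text.splitlines(keepends=True):
        # Flush the current chunk before it would overflow.
        if current and len(current) + len(line) > max_chars:
            chunks.append(current)
            current = ""
        current += line
    if current:
        chunks.append(current)
    return chunks

# Hypothetical extracted statement text, one transaction per line.
statement = "\n".join(f"2024-01-{d:02d}  COFFEE SHOP  -4.50" for d in range(1, 31))
chunks = chunk_text(statement, max_chars=200)
```

Each chunk can then be sent to the model separately, with the per-chunk transaction counts summed afterwards, instead of hoping the model parses the raw PDF in one shot.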

Conclusion

While Phi seems to have an edge in accuracy for text-based queries, DeepSeek and Qwen offer competitive response times, albeit slightly. However, all three models currently struggle with handling certain types of files accurately. As these technologies evolve, it will be interesting to see how they address such limitations while continuing to improve their core functionalities.

Ultimately, the choice between these models may come down to specific needs: prioritize accuracy and context understanding with Phi, or opt for slightly quicker responses with DeepSeek or Qwen, keeping in mind their current shared challenges.

Setup guide - Prerequisites

Before you begin the installation process, ensure that your computer meets the following requirements:

  • Operating System: Windows 10 or later, macOS Catalina or later, or a Linux distribution (Ubuntu recommended).
  • RAM: At least 16GB of RAM is recommended for smooth operation.
  • Storage Space: Ensure sufficient disk space; models can take up several gigabytes.
  • Python Version: Python 3.7 or higher should be installed if you plan to run custom scripts with the models.
  • GPU: a decent discrete GPU makes inference much faster. My machine runs an RTX 4090, which is completely overkill for these models.

Installation Guide

Download and Install LM Studio

  1. Visit the official LM Studio website and navigate to the download section.
  2. Download the installer for your operating system.
  3. Run the installer and follow the on-screen instructions.

Download and Install Language Models

With LM Studio installed, you can now download and install specific language models.

  1. Open LM Studio: Launch the application from your Start menu (Windows), Applications folder (macOS), or terminal (Linux).
  2. Navigate to the Models Section: In the LM Studio interface, find the "Models" tab.
  3. Search for Your Model:
    • To download Phi, search for "Phi" in the model repository.
    • For Qwen, enter "Qwen" in the search bar.
    • If you're interested in DeepSeek models, simply type "DeepSeek."
  4. Select and Download: Once you find your desired model, click on it to view its details. Then press the "Download" button.
  5. Install the Model: After downloading, select the model from the list of downloaded files and choose the "Install" option.

Run Your Language Model

  1. Open the Installed Models Section: Go back to the main interface in LM Studio and navigate to the section displaying installed models.
  2. Choose a Model: Select Phi, Qwen, or DeepSeek from your list of installed models.
  3. Tweak your settings: you might want to increase the context length (the token limit) if you plan to run bigger queries.
  4. Interact with the Model: Use the built-in text box to input queries or commands for the model to process. You can also use API endpoints if you are integrating with other applications.
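For the API route, LM Studio can serve downloaded models over an OpenAI-compatible HTTP endpoint (by default on localhost port 1234). Here is a minimal sketch of building and sending a chat request; the model name is an assumption and should match whatever you actually downloaded:

```python
import json
import urllib.request

def build_chat_request(model, prompt, max_tokens=256):
    """Build an OpenAI-style chat completion payload for the local server."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }

def send(payload, url="http://localhost:1234/v1/chat/completions"):
    """POST the payload to the local LM Studio server (requires it to be running)."""
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# "qwen2.5-7b-instruct" is a placeholder; use the identifier shown in LM Studio.
payload = build_chat_request("qwen2.5-7b-instruct", "Summarize this statement.")
```

Calling `send(payload)` with the local server running returns the usual chat-completion JSON, so existing OpenAI-client code can often be pointed at the local URL with minimal changes.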
