有些急性子


Your computer's specs are too weak? Here's how to experience the charm of "local large models" for free.

Background#

==Tencent Cloud Studio gives you 10,000 free GPU minutes every month: use them to build large models in the cloud.==

Setting Up AI Space#

Open https://cloud.tencent.com/ and log in as prompted, then choose a template based on your needs. Here we take Ollama as an example: check Ollama as shown in the image, then create a basic space.

image

image

Entering the IDE Environment#

Check which local large models are already installed via the terminal, using the following command:

ollama list

The default installed model is: llama3:latest
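The same check can also be done programmatically: the Ollama server exposes a REST API on port 11434, and `GET /api/tags` returns the installed models as JSON. The sketch below uses only the standard library; the `model_names` helper is a hypothetical name for illustration.

```python
import json
import urllib.request

def model_names(tags_json: dict) -> list:
    """Extract model names from an /api/tags response body."""
    return [m['name'] for m in tags_json.get('models', [])]

try:
    # Query the local Ollama server for its installed models
    with urllib.request.urlopen('http://localhost:11434/api/tags', timeout=5) as resp:
        print(model_names(json.load(resp)))  # e.g. ['llama3:latest']
except OSError as err:
    print('Ollama server not reachable:', err)
```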

Installing Required Local Large Models#

Log in to the Ollama official website and select the large model you need. Taking deepseek-r1:32b as an example, run ollama pull deepseek-r1:32b in the IDE terminal and wait for the model to finish downloading.

image

image
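If you prefer scripting the download, the pull can also go through the REST API (`POST /api/pull`), which streams JSON-lines progress objects with a `"status"` field. This is a sketch, not the tutorial's method; the `pull_statuses` helper is a hypothetical name, and the request body field (`name`) follows the Ollama API docs.

```python
import json
import urllib.request

def pull_statuses(lines) -> list:
    """Parse streamed JSON-lines pull progress into status strings."""
    return [json.loads(line)['status'] for line in lines if line.strip()]

req = urllib.request.Request(
    'http://localhost:11434/api/pull',
    data=json.dumps({'name': 'deepseek-r1:32b'}).encode(),
    headers={'Content-Type': 'application/json'},
)
try:
    # Each streamed line reports progress, e.g. "pulling manifest" ... "success"
    with urllib.request.urlopen(req, timeout=5) as resp:
        for line in resp:
            print(json.loads(line)['status'])
except OSError as err:
    print('Ollama server not reachable:', err)
```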

Creating a Python Program to Start the Large Model Experience Journey#

Take the following Python program as an example:

from ollama import chat, ChatResponse

response: ChatResponse = chat(
    model='deepseek-r1:32b',
    messages=[
        {'role': 'user', 'content': 'Who are you?'},
    ]
)

print(response['message']['content'])

The terminal output is as follows:

image
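For long replies from a reasoning model like deepseek-r1, waiting for the full response can feel slow. A small variation, assuming the same `ollama` package and a running server, passes `stream=True` so tokens print as they are generated; `build_messages` is a hypothetical helper added for clarity.

```python
def build_messages(prompt: str) -> list:
    """Build the single-turn message list that chat() expects."""
    return [{'role': 'user', 'content': prompt}]

try:
    from ollama import chat  # requires `pip install ollama` and a running server
    # stream=True yields chunks; print each token fragment as it arrives
    for chunk in chat(model='deepseek-r1:32b',
                      messages=build_messages('Who are you?'),
                      stream=True):
        print(chunk['message']['content'], end='', flush=True)
except Exception as err:
    print('chat failed (is the Ollama server running?):', err)
```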

Finally#

In testing, I found that 16GB of VRAM still struggles to run the 32b model, so let's try downloading a 14b model instead...
