# AI Exploration with Win10 + WSL2 + Ubuntu 22.04 (Part 1)
## Architecture Diagram

(architecture diagram image)

## Installing Multiple Ubuntu Subsystems in WSL2

The point of installing multiple Ubuntu subsystems in WSL2 is to isolate the different AI exploration projects from one another and avoid problems such as dependency conflicts.

### 1. Install Ubuntu 22.04

```shell
wsl --install -d Ubuntu-22.04
```

### 2. Initial environment

```shell
sudo vim /etc/wsl.conf
```

```ini
[network]
hostname = [new hostname]
generateHosts = false
generateResolvConf = false

[user]
default = root
```

```shell
sudo vi /etc/hosts
```

```
127.0.1.1 [new hostname].localdomain [new hostname]
```

```shell
sudo vi /etc/resolv.conf
```

```
nameserver 8.8.8.8
nameserver 8.8.4.4
```

```shell
sudo vi /etc/systemd/resolved.conf
```

```ini
[Resolve]
DNS=8.8.8.8
```

```shell
sudo systemctl restart systemd-resolved
sudo systemctl restart NetworkManager
```

### 3. Update the system and install dependencies

```shell
sudo apt update
sudo apt upgrade -y
sudo apt install -y net-tools network-manager zstd build-essential
sudo apt install -y cmake libcurl4-openssl-dev checkinstall git curl unzip
sudo ln -fs /bin/bash /bin/sh
```

### 4. Upgrade CMake to 3.28

```shell
wget https://github.com/Kitware/CMake/releases/download/v3.28.3/cmake-3.28.3-linux-x86_64.sh
chmod +x cmake-3.28.3-linux-x86_64.sh
sudo ./cmake-3.28.3-linux-x86_64.sh --skip-license --prefix=/usr/local
# Back up the old cmake link (optional, but recommended)
sudo mv /usr/bin/cmake /usr/bin/cmake.old
# Create a symlink pointing at the new /usr/local/bin/cmake
sudo ln -s /usr/local/bin/cmake /usr/bin/cmake
# Do the same for cpack, ctest, etc. to avoid errors later
sudo mv /usr/bin/cpack /usr/bin/cpack.old
sudo ln -s /usr/local/bin/cpack /usr/bin/cpack
sudo mv /usr/bin/ctest /usr/bin/ctest.old
sudo ln -s /usr/local/bin/ctest /usr/bin/ctest
cmake --version
```

### 5. Export as a base image

```shell
# wsl --export [distro name] [export target path]
wsl --export Ubuntu-22.04 E:\WSL\Ubuntu-22.04.tar
```

### 6. Create new subsystems with the WSL import feature

```shell
# wsl --import [distro name] [distro path] [import source path]
wsl --import Ubuntu-22.04-llamacpp E:\WSL\Ubuntu-22.04-llamacpp E:\WSL\Ubuntu-22.04.tar
```

### 7. Map ports between the host and the subsystem

```shell
netsh interface portproxy add v4tov4 listenport=[host listen port] listenaddress=0.0.0.0 connectport=[subsystem port] connectaddress=[subsystem IP]
```
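The same `netsh portproxy` pattern recurs for every subsystem deployed later in this guide, so a small helper that renders the command can save typos. A convenience sketch only — `portproxy_cmd` and its defaults are my own, not part of WSL or Windows:

```python
def portproxy_cmd(listen_port: int, connect_port: int, connect_addr: str,
                  listen_addr: str = "0.0.0.0") -> str:
    """Render the netsh command that forwards a host port to a WSL2 guest service."""
    return ("netsh interface portproxy add v4tov4 "
            f"listenport={listen_port} listenaddress={listen_addr} "
            f"connectport={connect_port} connectaddress={connect_addr}")

# Example: forward host port 9180 to Ollama's default port in the guest
print(portproxy_cmd(9180, 11434, "172.20.149.74"))
```

Paste the printed command into an elevated PowerShell/cmd prompt on the Windows host.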
### 8. Install up-to-date Node.js and npm

```shell
# If an older Node.js was installed via apt, remove it first to avoid conflicts
sudo apt remove -y nodejs npm
sudo apt autoremove -y
# Download and install nvm with the official script (the version number may change; check the website)
curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.7/install.sh | bash
source ~/.bashrc
# Printing a version number means the install succeeded
nvm --version
# Install Node.js 20.x LTS (the matching npm is installed automatically)
nvm install 20
# Make 20.x the default so the version does not switch back after a terminal restart
nvm alias default 20
# Should print v20.x.x (e.g. v20.17.0)
node -v
# Should print the matching npm version (e.g. 10.8.2)
npm -v
# Install pnpm
npm install -g pnpm
pnpm --version
```

### 9. Install uv

```shell
curl -LsSf https://astral.sh/uv/install.sh | sh
echo 'export PATH=$HOME/.local/bin:$PATH' >> ~/.bashrc
source ~/.bashrc
```

## Installing CUDA, cuDNN, NCCL, and torch

### 1. Install the CUDA Toolkit

Check the CUDA version:

```shell
nvidia-smi
```

At https://developer.nvidia.com/cuda-downloads, download and run the runfile matching your OS and CUDA version:

```shell
# 13.2
wget https://developer.download.nvidia.com/compute/cuda/13.2.0/local_installers/cuda_13.2.0_595.45.04_linux.run
sudo sh cuda_13.2.0_595.45.04_linux.run
# 12.8
wget https://developer.download.nvidia.com/compute/cuda/12.8.0/local_installers/cuda_12.8.0_570.86.10_linux.run
sudo sh cuda_12.8.0_570.86.10_linux.run
# 12.9
wget https://developer.download.nvidia.com/compute/cuda/12.9.0/local_installers/cuda_12.9.0_575.51.03_linux.run
sudo sh cuda_12.9.0_575.51.03_linux.run
```

### 2. Configure environment variables

```shell
# 13.2
echo 'export CUDA_HOME=/usr/local/cuda-13.2' >> ~/.bashrc
# 12.8
echo 'export CUDA_HOME=/usr/local/cuda-12.8' >> ~/.bashrc
# 12.9
echo 'export CUDA_HOME=/usr/local/cuda-12.9' >> ~/.bashrc
echo 'export PATH=$PATH:${CUDA_HOME}/bin' >> ~/.bashrc
echo 'export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:${CUDA_HOME}/lib64' >> ~/.bashrc
echo 'export PATH=$PATH:/home/ubuntu/.local/bin' >> ~/.bashrc
source ~/.bashrc
```

### 3. Check the nvcc version

```shell
nvcc --version
```
### 4. Install cuDNN

Download the cuDNN build matching your OS and CUDA version from https://developer.nvidia.com/rdp/cudnn-archive, then install it:

```shell
# Extract
tar -xvf cudnn-linux-x86_64-8.9.7.29_cuda12-archive.tar.xz
# Copy into the CUDA directory
# 13.2
sudo cp cudnn-linux-x86_64-8.9.7.29_cuda12-archive/include/cudnn* /usr/local/cuda-13.2/include
sudo cp -P cudnn-linux-x86_64-8.9.7.29_cuda12-archive/lib/libcudnn* /usr/local/cuda-13.2/lib64
# 12.8
sudo cp cudnn-linux-x86_64-8.9.7.29_cuda12-archive/include/cudnn* /usr/local/cuda-12.8/include
sudo cp -P cudnn-linux-x86_64-8.9.7.29_cuda12-archive/lib/libcudnn* /usr/local/cuda-12.8/lib64
# 12.9
sudo cp cudnn-linux-x86_64-8.9.7.29_cuda12-archive/include/cudnn* /usr/local/cuda-12.9/include
sudo cp -P cudnn-linux-x86_64-8.9.7.29_cuda12-archive/lib/libcudnn* /usr/local/cuda-12.9/lib64
# Fix file permissions
# 13.2
sudo chmod a+r /usr/local/cuda-13.2/include/cudnn*.h /usr/local/cuda-13.2/lib64/libcudnn*
# 12.8
sudo chmod a+r /usr/local/cuda-12.8/include/cudnn*.h /usr/local/cuda-12.8/lib64/libcudnn*
# 12.9
sudo chmod a+r /usr/local/cuda-12.9/include/cudnn*.h /usr/local/cuda-12.9/lib64/libcudnn*
# The install succeeded if this prints the version defines
cat /usr/local/cuda/include/cudnn_version.h | grep CUDNN_MAJOR -A 2
```

### 5. Install NCCL

Download and install the NCCL build matching your CUDA version from https://developer.nvidia.com/nccl/nccl-download:

```shell
wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2204/x86_64/cuda-keyring_1.1-1_all.deb
sudo dpkg -i cuda-keyring_1.1-1_all.deb
sudo apt update
# 13.2
sudo apt install libnccl2=2.30.3-1+cuda13.2 libnccl-dev=2.30.3-1+cuda13.2
# 12.8
sudo apt install libnccl2=2.26.2-1+cuda12.8 libnccl-dev=2.26.2-1+cuda12.8
# 12.9
sudo apt install libnccl2=2.30.3-1+cuda12.9 libnccl-dev=2.30.3-1+cuda12.9
```

### 6. Install torch

```shell
# 13.2
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu130
# 12.8
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu128
# 12.9
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu129
```
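As a small complement to the `grep` check in the cuDNN step, the version defines can also be parsed into a single version string. A minimal sketch — `SAMPLE` below is illustrative text; on a real install, read `/usr/local/cuda/include/cudnn_version.h` instead:

```python
import re

# Illustrative stand-in for the real header contents
SAMPLE = """\
#define CUDNN_MAJOR 8
#define CUDNN_MINOR 9
#define CUDNN_PATCHLEVEL 7
"""

def parse_cudnn_version(header_text: str) -> str:
    """Extract MAJOR.MINOR.PATCHLEVEL from cudnn_version.h text."""
    fields = {}
    for key in ("MAJOR", "MINOR", "PATCHLEVEL"):
        m = re.search(rf"#define CUDNN_{key} (\d+)", header_text)
        fields[key] = m.group(1)
    return "{MAJOR}.{MINOR}.{PATCHLEVEL}".format(**fields)

print(parse_cudnn_version(SAMPLE))  # → 8.9.7
```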
### 7. Verification script

```python
import torch
import platform


def get_system_info():
    return {
        "OS": platform.system(),
        "Python version": platform.python_version(),
        "PyTorch version": torch.__version__,
        "CUDA available": torch.cuda.is_available(),
        "CUDA version": torch.version.cuda,
        "MPS available": hasattr(torch.backends, "mps") and torch.backends.mps.is_available(),
        "GPU": torch.cuda.get_device_name(0) if torch.cuda.is_available() else "none",
    }


def test_mps():
    if not torch.backends.mps.is_available():
        if not torch.backends.mps.is_built():
            print("MPS unavailable: this PyTorch build was compiled without MPS support.")
        else:
            print("MPS unavailable: macOS is older than 12.3, or this machine has no MPS-enabled device.")
    else:
        mps_device = torch.device("mps")
        # Create a tensor on the mps device
        x = torch.ones(5, device=mps_device)
        # or
        x = torch.ones(5, device="mps")
        # Any operation runs on the GPU
        y = x * 2
        # Move a model to the mps device (YourFavoriteNet is a placeholder)
        model = YourFavoriteNet()
        model.to(mps_device)
        # Every call now runs on the GPU
        pred = model(x)


if __name__ == "__main__":
    for k, v in get_system_info().items():
        print(f"{k}: {v}")
    test_mps()
```

## Deploying Ollama Locally

### 1. Install Ollama

```shell
curl -fsSL https://ollama.com/install.sh | sh
```

### 2. Configure the service

```shell
sudo vi /etc/systemd/system/ollama.service
```

File contents:

```ini
[Unit]
Description=Ollama Service
After=network-online.target

[Service]
ExecStart=/usr/bin/ollama serve
User=ollama
Group=ollama
Restart=always
RestartSec=3
Environment="PATH=$PATH"
Environment="OLLAMA_HOST=0.0.0.0"
Environment="OLLAMA_ORIGINS=*"

[Install]
WantedBy=default.target
```

```shell
# Reload the systemd configuration
sudo systemctl daemon-reload
# Start the service
sudo systemctl start ollama.service
# Check the service status
sudo systemctl status ollama.service
# Enable the service at boot
sudo systemctl enable ollama.service
```

### 3. Pull a model

```shell
ollama pull qwen3.5:35b
```

### 4. Install Nginx

```shell
# Install Nginx
sudo apt install nginx -y
# Start Nginx and enable it at boot
sudo systemctl start nginx
sudo systemctl enable nginx
# Verify that Nginx is running; if so, the status shows "active (running)"
sudo systemctl status nginx
```

### 5. Configure API key validation

```shell
sudo tee /etc/nginx/conf.d/ollama.conf <<'EOF'
server {
    listen 9180;
    location / {
        if ($http_authorization != "[API KEY]") {
            return 403;
        }
        proxy_pass http://localhost:11434;
    }
}
EOF
# Reload Nginx to pick up the new configuration
sudo systemctl reload nginx
```

### 6. Map the host port

```shell
netsh interface portproxy add v4tov4 listenport=9180 listenaddress=0.0.0.0 connectport=9180 connectaddress=172.20.149.74
```
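With the nginx key check in place, every client must send a matching `Authorization` header or it gets a 403. A minimal client-side sketch using only the standard library — the port, model name, and `[API KEY]` literal are the placeholders from the config above:

```python
import json
import urllib.request

def build_ollama_request(prompt: str, api_key: str,
                         base_url: str = "http://localhost:9180"):
    """Build a generate request for the nginx-proxied Ollama endpoint.

    The Authorization value must equal the [API KEY] literal configured in
    /etc/nginx/conf.d/ollama.conf, otherwise nginx returns 403.
    """
    body = json.dumps({"model": "qwen3.5:35b", "prompt": prompt}).encode()
    return urllib.request.Request(
        f"{base_url}/api/generate", data=body,
        headers={"Authorization": api_key, "Content-Type": "application/json"})

req = build_ollama_request("Hello", "[API KEY]")
print(req.full_url, req.get_header("Authorization"))
# Send it with: urllib.request.urlopen(req)
```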
## Deploying Llama.cpp Locally

### 1. Install llama.cpp

```shell
# Clone the repository
cd /usr/local
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
# Build with CMake (recommended)
mkdir build
cd build
# Enable CUDA at configure time (older trees used -DLLAMA_CUDA=ON)
cmake .. -DGGML_CUDA=ON
cmake --build . --config Release -j $(nproc)
echo 'export PATH=$PATH:/usr/local/llama.cpp/build/bin' >> ~/.bashrc
source ~/.bashrc
```

### 2. Install modelscope

```shell
sudo apt install python3-pip
pip install modelscope -i https://pypi.tuna.tsinghua.edu.cn/simple
echo 'export PATH=$PATH:/home/ubuntu/.local/bin' >> ~/.bashrc
source ~/.bashrc
```

### 3. Download a model from ModelScope

Pick a suitable model at https://www.modelscope.cn/models.

```shell
# modelscope download --model [model collection]/[model] --local_dir [download path]
modelscope download --model Qwen/Qwen3.5-27B-FP8 README.md --local_dir /usr/local/llama.cpp/build/models
```

### 4. Run the model

```shell
# Go to the build output directory
cd /usr/local/llama.cpp/build/bin
# Basic usage
./llama-cli \
  -m ~/models/Llama-3.2-1B-Instruct-Q4_K_M.gguf \
  -p "Hello, please introduce yourself" \
  -n 512
# Interactive chat mode
./llama-cli \
  -m ~/models/Llama-3.2-1B-Instruct-Q4_K_M.gguf \
  --chat-template llama3 \
  -cnv
# Start the HTTP API server
./llama-server \
  -m ~/models/Llama-3.2-1B-Instruct-Q4_K_M.gguf \
  --host 0.0.0.0 \
  --port 8080 \
  -c 4096
```

### 5. Configure the service

```shell
sudo vi /etc/systemd/system/llama-server.service
```

Set the API port to 9191:

```ini
[Unit]
Description=llama.cpp HTTP Server
After=network.target

[Service]
Type=simple
User=llama
Group=llama
WorkingDirectory=/usr/local/llama.cpp
ExecStart=/usr/local/llama.cpp/build/bin/llama-server \
  -m /usr/local/llama.cpp/build/models/model_file_name.gguf \
  --port 9191 \
  --host 0.0.0.0 \
  -c 163840 \
  -np 4 \
  --threads 12 \
  --cont-batching \
  -ngl 99999 \
  -b 4096
Restart=always
RestartSec=5
Environment=LD_LIBRARY_PATH=/usr/local/cuda/lib64
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
```

### 6. Install Nginx

```shell
# Install Nginx
sudo apt install nginx -y
# Start Nginx and enable it at boot
sudo systemctl start nginx
sudo systemctl enable nginx
# Verify that Nginx is running; if so, the status shows "active (running)"
sudo systemctl status nginx
```

### 7. Configure API key validation

```shell
sudo tee /etc/nginx/conf.d/llamacpp.conf <<'EOF'
server {
    listen 9280;
    location / {
        if ($http_authorization != "[API KEY]") {
            return 403;
        }
        proxy_pass http://localhost:9191;
    }
}
EOF
# Reload Nginx to pick up the new configuration
sudo systemctl reload nginx
```
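Recent llama-server builds also expose an OpenAI-compatible `/v1/chat/completions` endpoint, so the proxied service can be driven with a standard chat payload. A minimal sketch of building that payload — the `"local"` model name is a placeholder, since llama-server serves the single model passed with `-m`:

```python
import json

def chat_payload(user_msg: str, max_tokens: int = 256) -> str:
    """JSON body for llama-server's OpenAI-compatible chat endpoint
    (POST http://localhost:9280/v1/chat/completions through the nginx proxy,
    with the Authorization header from the config above)."""
    return json.dumps({
        "model": "local",  # placeholder; the server uses the model given with -m
        "messages": [{"role": "user", "content": user_msg}],
        "max_tokens": max_tokens,
    })

print(chat_payload("Introduce yourself"))
```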
### 8. Map the host port

```shell
netsh interface portproxy add v4tov4 listenport=9280 listenaddress=0.0.0.0 connectport=9280 connectaddress=172.20.149.74
```

## Deploying OpenClaw Locally

## Deploying CoPaw Locally

### 1. Install CoPaw

```shell
curl -fsSL https://copaw.agentscope.io/install.sh | bash
```

### 2. Initialize CoPaw

```shell
/home/ubuntu/.local/bin/copaw init --defaults
```

### 3. Start CoPaw

```shell
/home/ubuntu/.local/bin/copaw app
```

### 4. Open the console

http://127.0.0.1:8088

### 5. Create a service

```shell
sudo tee /etc/systemd/system/copaw.service <<'EOF'
[Unit]
Description=CoPaw Inference Service
After=network.target

[Service]
Type=simple
User=ubuntu
Group=ubuntu
WorkingDirectory=/home/ubuntu/.copaw
ExecStart=/home/ubuntu/.local/bin/copaw app
ExecStop=/home/ubuntu/.local/bin/copaw shutdown
Restart=always
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF
sudo systemctl start copaw.service
sudo systemctl enable copaw.service
```

### 6. Install Nginx

```shell
# Install Nginx
sudo apt install nginx -y
# Start Nginx and enable it at boot
sudo systemctl start nginx
sudo systemctl enable nginx
# Verify that Nginx is running; if so, the status shows "active (running)"
sudo systemctl status nginx
```

### 7. Configure the proxy

CoPaw binds to 127.0.0.1 by default, and edits to its config file keep getting overwritten, so use Nginx as a reverse proxy instead:

```shell
sudo tee /etc/nginx/conf.d/copaw.conf <<'EOF'
server {
    listen 18088;
    location / {
        proxy_pass http://localhost:8088;
    }
}
EOF
# Reload Nginx to pick up the new configuration
sudo systemctl reload nginx
```

### 8. Map the host port

```shell
netsh interface portproxy add v4tov4 listenport=18088 listenaddress=0.0.0.0 connectport=18088 connectaddress=172.20.149.74
```