YOLOv8 blob conversion tutorial


Editor: OAK China
First published at: oakchina.cn
If you like it, please give it a 👍
This content may be updated from time to time; the official site always has the latest version, so please check the first-publication link.

▌Foreword

Hello everyone, this is OAK China, and I'm your assistant.

Recently a few friends in our community were unclear about how to convert YOLOv8 to a blob, so I wrote this post. (Please tell me how thoughtful I am! Our principle: every reasonable request gets an answer!)

1. For conversion and usage tutorials for other YOLO variants, please see the related posts.
2. For detection-type YOLO models we recommend the online converter (link); only if the online conversion fails should you fall back to the local conversion described in this tutorial.

Converting .pt to .onnx

Use the script below (place it in the YOLOv8 root directory) to convert the PyTorch model to an ONNX model; if openvino_dev is installed, it will also convert it further to an OpenVINO model:

Example usage:

python export_onnx.py -w <path_to_model>.pt -imgsz 640 

export_onnx.py:

#!/usr/bin/env python3
# -*- coding:utf-8 -*-
import argparse
import json
import math
import subprocess
import sys
import time
import warnings
from pathlib import Path

import onnx
import torch
import torch.nn as nn

warnings.filterwarnings("ignore")

ROOT = Path.cwd()
if str(ROOT) not in sys.path:
    sys.path.append(str(ROOT))

from ultralytics.nn.modules import Detect
from ultralytics.nn.tasks import attempt_load_weights
from ultralytics.yolo.utils import LOGGER


class DetectV8(nn.Module):
    """YOLOv8 Detect head for detection models"""

    dynamic = False  # force grid reconstruction
    export = False  # export mode
    shape = None
    anchors = torch.empty(0)  # init
    strides = torch.empty(0)  # init

    def __init__(self, old_detect):
        super().__init__()
        self.nc = old_detect.nc  # number of classes
        self.nl = old_detect.nl  # number of detection layers
        self.reg_max = old_detect.reg_max  # DFL channels (ch[0] // 16 to scale 4/8/12/16/20 for n/s/m/l/x)
        self.no = old_detect.no  # number of outputs per anchor
        self.stride = old_detect.stride  # strides computed during build
        self.cv2 = old_detect.cv2
        self.cv3 = old_detect.cv3
        self.dfl = old_detect.dfl
        self.f = old_detect.f
        self.i = old_detect.i

    def forward(self, x):
        shape = x[0].shape  # BCHW
        for i in range(self.nl):
            x[i] = torch.cat((self.cv2[i](x[i]), self.cv3[i](x[i])), 1)
        box, cls = torch.cat(
            [xi.view(shape[0], self.no, -1) for xi in x], 2
        ).split((self.reg_max * 4, self.nc), 1)
        box = self.dfl(box)
        cls_output = cls.sigmoid()
        # Get the max confidence over classes
        conf, _ = cls_output.max(1, keepdim=True)
        # Concat box, confidence and per-class scores
        y = torch.cat([box, conf, cls_output], axis=1)
        # Split back into one output per detection layer
        outputs = []
        start, end = 0, 0
        for i, xi in enumerate(x):
            end += xi.shape[-2] * xi.shape[-1]
            outputs.append(
                y[:, :, start:end].view(xi.shape[0], -1, xi.shape[-2], xi.shape[-1])
            )
            start += xi.shape[-2] * xi.shape[-1]
        return outputs

    def bias_init(self):
        # Initialize Detect() biases, WARNING: requires stride availability
        m = self  # self.model[-1]  # Detect() module
        for a, b, s in zip(m.cv2, m.cv3, m.stride):
            a[-1].bias.data[:] = 1.0  # box
            b[-1].bias.data[: m.nc] = math.log(5 / m.nc / (640 / s) ** 2)  # cls (.01 objects, 80 classes, 640 img)


if __name__ == "__main__":
    parser = argparse.ArgumentParser(formatter_class=argparse.ArgumentDefaultsHelpFormatter)
    parser.add_argument("-w", "--weights", type=Path, default="./yolov8s.pt", help="weights path")
    parser.add_argument(
        "-imgsz",
        "--img-size",
        nargs="+",
        type=int,
        default=[640, 640],
        help="image size",
    )  # height, width
    parser.add_argument("--opset", type=int, default=12, help="opset version")
    args = parser.parse_args()
    args.img_size *= 2 if len(args.img_size) == 1 else 1  # expand
    LOGGER.info(args)
    t = time.time()

    # Check device
    device = torch.device("cpu")

    # Load PyTorch model
    model = attempt_load_weights(str(args.weights), device=device, inplace=True, fuse=True)  # load FP32 model
    labels = model.module.names if hasattr(model, "module") else model.names  # get class names
    labels = labels if isinstance(labels, list) else list(labels.values())

    # Check num classes and labels
    assert model.nc == len(labels), f"Model class count {model.nc} != len(names) {len(labels)}"

    # Replace with the custom detection head
    if isinstance(model.model[-1], Detect):
        model.model[-1] = DetectV8(model.model[-1])

    num_branches = model.model[-1].nl

    # Input
    img = torch.zeros(1, 3, *args.img_size).to(device)  # image size(1,3,320,192) iDetection

    # Update model
    model.eval()

    # ONNX export
    try:
        LOGGER.info("\nStarting to export ONNX...")
        output_list = [f"output{i+1}_yolov6r2" for i in range(num_branches)]
        export_file = args.weights.with_suffix(".onnx")  # filename
        torch.onnx.export(
            model,
            img,
            export_file,
            verbose=False,
            opset_version=args.opset,
            training=torch.onnx.TrainingMode.EVAL,
            do_constant_folding=True,
            input_names=["images"],
            output_names=output_list,
            dynamic_axes=None,
        )

        # Checks
        onnx_model = onnx.load(export_file)  # load onnx model
        onnx.checker.check_model(onnx_model)  # check onnx model

        try:
            import onnxsim

            LOGGER.info("\nStarting to simplify ONNX...")
            onnx_model, check = onnxsim.simplify(onnx_model)
            assert check, "assert check failed"
            onnx.save(onnx_model, export_file)  # save the simplified model
        except Exception as e:
            LOGGER.warning(f"Simplifier failure: {e}")
        LOGGER.info(f"ONNX export success, saved as {export_file}")
    except Exception as e:
        LOGGER.error(f"ONNX export failure: {e}")

    # Label/anchor metadata sidecar
    export_json = export_file.with_suffix(".json")
    export_json.write_text(
        json.dumps(
            {
                "anchors": [],
                "anchor_masks": {},
                "coordinates": 4,
                "labels": labels,
                "num_classes": model.nc,
            },
            indent=4,
        )
    )
    LOGGER.info("Labels data export success, saved as %s" % export_json)

    # OpenVINO export
    LOGGER.info("\nStarting to export OpenVINO...")
    export_dir = Path(str(export_file).replace(".onnx", "_openvino"))
    OpenVINO_cmd = (
        "mo --input_model %s --output_dir %s --data_type FP16 --scale 255 --reverse_input_channels --output '%s'"
        % (export_file, export_dir, ",".join(output_list))
    )
    try:
        subprocess.check_output(OpenVINO_cmd, shell=True)
        LOGGER.info(f"OpenVINO export success, saved as {export_dir}")
    except Exception as e:
        LOGGER.warning(f"OpenVINO export failure: {e}")
        LOGGER.info("\nBy the way, you can try to export OpenVINO manually with:")
        LOGGER.info("\n%s" % OpenVINO_cmd)

    # OAK blob export
    LOGGER.info("\nThen you can try to export the blob with:")
    export_xml = export_dir / export_file.with_suffix(".xml").name
    export_blob = export_dir / export_file.with_suffix(".blob").name
    blob_cmd = (
        "compile_tool -m %s -ip U8 -d MYRIAD -VPU_NUMBER_OF_SHAVES 6 -VPU_NUMBER_OF_CMX_SLICES 6 -o %s"
        % (export_xml, export_blob)
    )
    LOGGER.info("\n%s" % blob_cmd)

    # Finish
    LOGGER.info("\nExport complete (%.2fs)" % (time.time() - t))
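
Before converting, you can sanity-check the export with the onnx package; a minimal sketch (yolov8n.onnx is just an example filename, not from the original post):

import onnx

m = onnx.load("yolov8n.onnx")
onnx.checker.check_model(m)
# The custom head emits one output per detection layer, so a 640x640
# export should list three outputs (strides 8, 16, 32).
print([o.name for o in m.graph.output])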

You can inspect the exported model's structure with Netron:

(Figure: the exported ONNX model viewed in Netron)
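
If you would rather launch Netron from Python, a minimal sketch (this assumes the netron pip package is installed; the call below is my addition, not from the original post):

import netron

# Serves the model on a local port and opens it in the browser.
netron.start("yolov8n.onnx")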

▌Conversion

Local conversion with OpenVINO

onnx -> openvino

mo is a command-line tool that ships with openvino_dev 2022.1.

Install it with pip install openvino-dev.

mo --input_model yolov8n.onnx --data_type FP16 --scale 255 --reverse_input_channels

openvino -> blob

<path>/compile_tool -m yolov8n.xml \
-ip U8 -d MYRIAD \
-VPU_NUMBER_OF_SHAVES 6 \
-VPU_NUMBER_OF_CMX_SLICES 6
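
Alternatively, if you don't have a local OpenVINO toolchain, the blobconverter package can compile the IR for you; a minimal sketch (file names are examples, not from the original post):

import blobconverter

blob_path = blobconverter.from_openvino(
    xml="yolov8n.xml",  # model topology
    bin="yolov8n.bin",  # model weights
    data_type="FP16",
    shaves=6,
)
print(blob_path)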

Online conversion

blobconverter web app: http://blobconverter.luxonis.com/

  • Open the page and follow the steps shown below:

(Figure: blobconverter web page)

  • Adjust the parameters and convert the model:

(Figure: blobconverter conversion parameters)

  1. Choose the onnx model
  2. Set the model optimizer params to --data_type=FP16 --scale 255 --reverse_input_channels
  3. Set shaves to 6
  4. Convert

blobconverter Python API

import blobconverter

# Returns the path to the compiled .blob
blob_path = blobconverter.from_onnx(
    "yolov8n.onnx",
    optimizer_params=[
        "--scale 255",
        "--reverse_input_channels",
    ],
    shaves=6,
)

blobconverter CLI

blobconverter --onnx yolov8n.onnx -sh 6 -o . --optimizer-params "--scale 255 --reverse_input_channels"

▌DepthAI example

Correct decoding requires a few configurable, network-specific parameters:

  • setNumClasses - number of YOLO detection classes
  • setIouThreshold - IoU threshold used for non-maximum suppression
  • setConfidenceThreshold - confidence threshold; detections below it are filtered out

import cv2
import depthai as dai
import numpy as np

model = dai.OpenVINO.Blob("yolov8n.blob")
dim = model.networkInputs.get("images").dims
W, H = dim[:2]

labelMap = [
    # "class_1", "class_2", "..."
    "class_%s" % i for i in range(80)
]

# Create pipeline
pipeline = dai.Pipeline()

# Define sources and outputs
camRgb = pipeline.create(dai.node.ColorCamera)
detectionNetwork = pipeline.create(dai.node.YoloDetectionNetwork)
xoutRgb = pipeline.create(dai.node.XLinkOut)
nnOut = pipeline.create(dai.node.XLinkOut)

xoutRgb.setStreamName("rgb")
nnOut.setStreamName("nn")

# Properties
camRgb.setPreviewSize(W, H)
camRgb.setResolution(dai.ColorCameraProperties.SensorResolution.THE_1080_P)
camRgb.setInterleaved(False)
camRgb.setColorOrder(dai.ColorCameraProperties.ColorOrder.BGR)
camRgb.setFps(40)

# Network specific settings
detectionNetwork.setBlob(model)
detectionNetwork.setConfidenceThreshold(0.5)
detectionNetwork.setNumClasses(80)
detectionNetwork.setCoordinateSize(4)
detectionNetwork.setAnchors([])
detectionNetwork.setAnchorMasks({})
detectionNetwork.setIouThreshold(0.5)

# Linking
camRgb.preview.link(detectionNetwork.input)
camRgb.preview.link(xoutRgb.input)
detectionNetwork.out.link(nnOut.input)

# Connect to device and start pipeline
with dai.Device(pipeline) as device:
    # Output queues will be used to get the rgb frames and nn data from the outputs defined above
    qRgb = device.getOutputQueue(name="rgb", maxSize=4, blocking=False)
    qDet = device.getOutputQueue(name="nn", maxSize=4, blocking=False)

    frame = None
    detections = []
    color2 = (255, 255, 255)

    # nn data, being the bounding box locations, are in <0..1> range
    # - they need to be normalized with frame width/height
    def frameNorm(frame, bbox):
        normVals = np.full(len(bbox), frame.shape[0])
        normVals[::2] = frame.shape[1]
        return (np.clip(np.array(bbox), 0, 1) * normVals).astype(int)

    def displayFrame(name, frame):
        color = (255, 0, 0)
        for detection in detections:
            bbox = frameNorm(
                frame,
                (detection.xmin, detection.ymin, detection.xmax, detection.ymax),
            )
            cv2.putText(frame, labelMap[detection.label], (bbox[0] + 10, bbox[1] + 20),
                        cv2.FONT_HERSHEY_TRIPLEX, 0.5, 255)
            cv2.putText(frame, f"{int(detection.confidence * 100)}%", (bbox[0] + 10, bbox[1] + 40),
                        cv2.FONT_HERSHEY_TRIPLEX, 0.5, 255)
            cv2.rectangle(frame, (bbox[0], bbox[1]), (bbox[2], bbox[3]), color, 2)
        # Show the frame
        cv2.imshow(name, frame)

    while True:
        inRgb = qRgb.tryGet()
        inDet = qDet.tryGet()

        if inRgb is not None:
            frame = inRgb.getCvFrame()

        if inDet is not None:
            detections = inDet.detections

        if frame is not None:
            displayFrame("rgb", frame)

        if cv2.waitKey(1) == ord('q'):
            break
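
The export script above also writes a .json sidecar (labels, num_classes, anchors, anchor_masks, coordinates) next to the weights; a hedged sketch of wiring those values in instead of hard-coding them (the configure_from_json helper is hypothetical, not part of DepthAI):

import json
from pathlib import Path

def configure_from_json(detectionNetwork, json_path):
    # Reads the sidecar written by export_onnx.py and applies it to the node.
    cfg = json.loads(Path(json_path).read_text())
    detectionNetwork.setNumClasses(cfg["num_classes"])
    detectionNetwork.setCoordinateSize(cfg["coordinates"])
    detectionNetwork.setAnchors(cfg["anchors"])          # empty for YOLOv8 (anchor-free)
    detectionNetwork.setAnchorMasks(cfg["anchor_masks"])
    return cfg["labels"]

# Usage: labelMap = configure_from_json(detectionNetwork, "yolov8n.json")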

▌References

https://www.oakchina.cn/2023/02/24/yolov8-blob/
https://docs.oakchina.cn/en/latest/
https://www.oakchina.cn/selection-guide/


OAK China
| Official distributor and technical service provider for the OpenCV AI Kit in China
| Tracking the latest AI technology and products

