MindSpore 25-Day Learning Camp, Day 14 | Image Style Transfer with CycleGAN

Model Introduction

Model Overview

CycleGAN (Cycle Generative Adversarial Network) is a cycle-consistent generative adversarial network. The model implements a method for learning to translate images from a source domain X to a target domain Y without paired training examples.

An important application area of the model is domain transfer: it only requires data from the two domains, with no strict correspondence between them, making it a novel unsupervised image-to-image translation network.

Model Architecture

A CycleGAN network is essentially composed of two mirror-symmetric GANs, structured as shown in the figure below:

(Figure: CycleGAN network structure)

A crucial part of the model is its loss function; of all the losses, the cycle consistency loss is the most important. The figure below illustrates how the cycle loss is computed:

(Figure: computation of the cycle consistency loss)

Dataset

We use the apple and orange portions of the ImageNet dataset.

Dataset Download

from download import download

url = "https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/notebook/models/application/CycleGAN_apple2orange.zip"

download(url, ".", kind="zip", replace=True)

Dataset Loading

from mindspore.dataset import MindDataset

# Read data in MindRecord format
name_mr = "./CycleGAN_apple2orange/apple2orange_train.mindrecord"
data = MindDataset(dataset_files=name_mr)
print("Datasize: ", data.get_dataset_size())

batch_size = 1
dataset = data.batch(batch_size)
datasize = dataset.get_dataset_size()
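
As a quick sanity check, we can visualize one training pair. This is a minimal sketch: it assumes the MindRecord columns image_A and image_B used later in the training loop, with pixel values normalized to [-1, 1].

import matplotlib.pyplot as plt

sample = next(dataset.create_dict_iterator(output_numpy=True))
# Each image is CHW in [-1, 1]; convert to HWC in [0, 1] for display
img_a = (sample["image_A"][0].transpose(1, 2, 0) + 1) / 2
img_b = (sample["image_B"][0].transpose(1, 2, 0) + 1) / 2

plt.subplot(1, 2, 1)
plt.axis("off")
plt.imshow(img_a)
plt.subplot(1, 2, 2)
plt.axis("off")
plt.imshow(img_b)
plt.show()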

Building the Generator

The generator adopts a ResNet architecture. For 128×128 inputs, 6 residual blocks are chained; for images of 256×256 and above, 9 residual blocks are needed. Since the images here are 256×256, this network uses 9 residual blocks.

The generator structure is as follows:

(Figure: CycleGAN generator structure)

Code implementation of the model structure:

import mindspore.nn as nn
import mindspore.ops as ops
from mindspore.common.initializer import Normal

weight_init = Normal(sigma=0.02)

class ConvNormReLU(nn.Cell):
    def __init__(self, input_channel, out_planes, kernel_size=4, stride=2, alpha=0.2, norm_mode='instance',
                 pad_mode='CONSTANT', use_relu=True, padding=None, transpose=False):
        super(ConvNormReLU, self).__init__()
        norm = nn.BatchNorm2d(out_planes)
        if norm_mode == 'instance':
            norm = nn.BatchNorm2d(out_planes, affine=False)
        has_bias = (norm_mode == 'instance')
        if padding is None:
            padding = (kernel_size - 1) // 2
        if pad_mode == 'CONSTANT':
            if transpose:
                conv = nn.Conv2dTranspose(input_channel, out_planes, kernel_size, stride, pad_mode='same',
                                          has_bias=has_bias, weight_init=weight_init)
            else:
                conv = nn.Conv2d(input_channel, out_planes, kernel_size, stride, pad_mode='pad',
                                 has_bias=has_bias, padding=padding, weight_init=weight_init)
            layers = [conv, norm]
        else:
            paddings = ((0, 0), (0, 0), (padding, padding), (padding, padding))
            pad = nn.Pad(paddings=paddings, mode=pad_mode)
            if transpose:
                conv = nn.Conv2dTranspose(input_channel, out_planes, kernel_size, stride, pad_mode='pad',
                                          has_bias=has_bias, weight_init=weight_init)
            else:
                conv = nn.Conv2d(input_channel, out_planes, kernel_size, stride, pad_mode='pad',
                                 has_bias=has_bias, weight_init=weight_init)
            layers = [pad, conv, norm]
        if use_relu:
            relu = nn.ReLU()
            if alpha > 0:
                relu = nn.LeakyReLU(alpha)
            layers.append(relu)
        self.features = nn.SequentialCell(layers)

    def construct(self, x):
        output = self.features(x)
        return output


class ResidualBlock(nn.Cell):
    def __init__(self, dim, norm_mode='instance', dropout=False, pad_mode="CONSTANT"):
        super(ResidualBlock, self).__init__()
        self.conv1 = ConvNormReLU(dim, dim, 3, 1, 0, norm_mode, pad_mode)
        self.conv2 = ConvNormReLU(dim, dim, 3, 1, 0, norm_mode, pad_mode, use_relu=False)
        self.dropout = dropout
        if dropout:
            self.dropout = nn.Dropout(p=0.5)

    def construct(self, x):
        out = self.conv1(x)
        if self.dropout:
            out = self.dropout(out)
        out = self.conv2(out)
        return x + out


class ResNetGenerator(nn.Cell):
    def __init__(self, input_channel=3, output_channel=64, n_layers=9, alpha=0.2, norm_mode='instance', dropout=False,
                 pad_mode="CONSTANT"):
        super(ResNetGenerator, self).__init__()
        self.conv_in = ConvNormReLU(input_channel, output_channel, 7, 1, alpha, norm_mode, pad_mode=pad_mode)
        self.down_1 = ConvNormReLU(output_channel, output_channel * 2, 3, 2, alpha, norm_mode)
        self.down_2 = ConvNormReLU(output_channel * 2, output_channel * 4, 3, 2, alpha, norm_mode)
        # Build n_layers independent residual blocks; multiplying a one-element list
        # would reuse the same block (and its parameters) n_layers times
        layers = [ResidualBlock(output_channel * 4, norm_mode, dropout=dropout, pad_mode=pad_mode)
                  for _ in range(n_layers)]
        self.residuals = nn.SequentialCell(layers)
        self.up_2 = ConvNormReLU(output_channel * 4, output_channel * 2, 3, 2, alpha, norm_mode, transpose=True)
        self.up_1 = ConvNormReLU(output_channel * 2, output_channel, 3, 2, alpha, norm_mode, transpose=True)
        if pad_mode == "CONSTANT":
            self.conv_out = nn.Conv2d(output_channel, 3, kernel_size=7, stride=1, pad_mode='pad',
                                      padding=3, weight_init=weight_init)
        else:
            pad = nn.Pad(paddings=((0, 0), (0, 0), (3, 3), (3, 3)), mode=pad_mode)
            conv = nn.Conv2d(output_channel, 3, kernel_size=7, stride=1, pad_mode='pad', weight_init=weight_init)
            self.conv_out = nn.SequentialCell([pad, conv])

    def construct(self, x):
        x = self.conv_in(x)
        x = self.down_1(x)
        x = self.down_2(x)
        x = self.residuals(x)
        x = self.up_2(x)
        x = self.up_1(x)
        output = self.conv_out(x)
        return ops.tanh(output)

# Instantiate the generators
net_rg_a = ResNetGenerator()
net_rg_a.update_parameters_name('net_rg_a.')

net_rg_b = ResNetGenerator()
net_rg_b.update_parameters_name('net_rg_b.')
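
As a quick shape check (a minimal sketch on random input, not part of the tutorial), a 256×256 image should pass through the generator with its spatial size unchanged:

import numpy as np
import mindspore as ms

dummy = ms.Tensor(np.random.randn(1, 3, 256, 256).astype(np.float32))
out = net_rg_a(dummy)
print(out.shape)  # (1, 3, 256, 256); values lie in (-1, 1) because of the final tanh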

Building the Discriminator

The discriminator is essentially a binary classification network that outputs the probability that an input image is real. The implementation is as follows:

# Define the discriminator
class Discriminator(nn.Cell):
    def __init__(self, input_channel=3, output_channel=64, n_layers=3, alpha=0.2, norm_mode='instance'):
        super(Discriminator, self).__init__()
        kernel_size = 4
        layers = [nn.Conv2d(input_channel, output_channel, kernel_size, 2, pad_mode='pad', padding=1, weight_init=weight_init),
                  nn.LeakyReLU(alpha)]
        nf_mult = output_channel
        for i in range(1, n_layers):
            nf_mult_prev = nf_mult
            nf_mult = min(2 ** i, 8) * output_channel
            layers.append(ConvNormReLU(nf_mult_prev, nf_mult, kernel_size, 2, alpha, norm_mode, padding=1))
        nf_mult_prev = nf_mult
        nf_mult = min(2 ** n_layers, 8) * output_channel
        layers.append(ConvNormReLU(nf_mult_prev, nf_mult, kernel_size, 1, alpha, norm_mode, padding=1))
        layers.append(nn.Conv2d(nf_mult, 1, kernel_size, 1, pad_mode='pad', padding=1, weight_init=weight_init))
        self.features = nn.SequentialCell(layers)

    def construct(self, x):
        output = self.features(x)
        return output

# Instantiate the discriminators
net_d_a = Discriminator()
net_d_a.update_parameters_name('net_d_a.')

net_d_b = Discriminator()
net_d_b.update_parameters_name('net_d_b.')
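
This is a PatchGAN-style classifier: instead of a single scalar, it outputs a score map in which each entry judges one patch of the input. A minimal shape check on random input (a hedged sketch, not part of the tutorial):

import numpy as np
import mindspore as ms

dummy = ms.Tensor(np.random.randn(1, 3, 256, 256).astype(np.float32))
score_map = net_d_a(dummy)
print(score_map.shape)  # (1, 1, 30, 30): one real/fake score per image patch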

Optimizers and Loss Functions

For a generator G and its discriminator D_Y, the adversarial objective is:

L_{GAN}(G,D_Y,X,Y)=E_{y\sim p_{data}(y)}[\log D_Y(y)]+E_{x\sim p_{data}(x)}[\log(1-D_Y(G(x)))]

The generator aims to minimize this objective while the discriminator tries to maximize it, i.e.:

\min_G \max_{D_Y} L_{GAN}(G,D_Y,X,Y)

An adversarial loss alone cannot guarantee that the learned function maps an individual input to the desired output. To further reduce the space of possible mapping functions, the learned mappings should be cycle-consistent, i.e. for each image x from domain X:

x\rightarrow G(x)\rightarrow F(G(x))\approx x

Similarly, for each image y from domain Y:

y\rightarrow F(y)\rightarrow G(F(y))\approx y

The cycle consistency loss is defined as:

L_{cyc}(G,F)=E_{x\sim p_{data}(x)}[\left \| F(G(x))-x \right \|_1]+E_{y\sim p_{data}(y)}[\left \| G(F(y))-y \right \|_1]

The cycle consistency loss ensures that the reconstructed image F(G(x)) closely matches the input image x.
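
For completeness, the full objective from the CycleGAN paper combines the two adversarial losses with the weighted cycle consistency loss (the weight \lambda corresponds to lambda_a and lambda_b, both 10.0, in the code below):

L(G,F,D_X,D_Y)=L_{GAN}(G,D_Y,X,Y)+L_{GAN}(F,D_X,Y,X)+\lambda L_{cyc}(G,F)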

Implementation:

# Optimizers for the generators and discriminators
optimizer_rg_a = nn.Adam(net_rg_a.trainable_params(), learning_rate=0.0002, beta1=0.5)
optimizer_rg_b = nn.Adam(net_rg_b.trainable_params(), learning_rate=0.0002, beta1=0.5)

optimizer_d_a = nn.Adam(net_d_a.trainable_params(), learning_rate=0.0002, beta1=0.5)
optimizer_d_b = nn.Adam(net_d_b.trainable_params(), learning_rate=0.0002, beta1=0.5)

# GAN loss. The discriminator's last layer has no sigmoid, so a least-squares (MSE) loss is used
loss_fn = nn.MSELoss(reduction='mean')
l1_loss = nn.L1Loss("mean")

def gan_loss(predict, target):
    target = ops.ones_like(predict) * target
    loss = loss_fn(predict, target)
    return loss
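
A quick sanity check of gan_loss on made-up values: with target True it reduces to the mean squared distance from an all-ones map, which is exactly the least-squares GAN objective used here.

import numpy as np
from mindspore import Tensor
from mindspore import dtype

pred = Tensor(np.array([0.8, 0.3], np.float32))
# ((1 - 0.8)^2 + (1 - 0.3)^2) / 2 = 0.265
print(gan_loss(pred, Tensor(True, dtype.bool_)))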

Forward Computation

To reduce model oscillation, we follow the strategy of Shrivastava et al. and update the discriminators using a history of generated images rather than only the images most recently produced by the generators.

Implementation:

import random

import mindspore as ms
from mindspore import Tensor
from mindspore import dtype

# Forward computation

def generator(img_a, img_b):
    # net_rg_a translates domain A -> B; net_rg_b translates domain B -> A
    fake_a = net_rg_b(img_b)       # B -> A
    fake_b = net_rg_a(img_a)       # A -> B
    rec_a = net_rg_b(fake_b)       # cycle reconstruction A -> B -> A
    rec_b = net_rg_a(fake_a)       # cycle reconstruction B -> A -> B
    identity_a = net_rg_b(img_a)   # identity mapping: B->A generator fed a real A image
    identity_b = net_rg_a(img_b)   # identity mapping: A->B generator fed a real B image
    return fake_a, fake_b, rec_a, rec_b, identity_a, identity_b

lambda_a = 10.0
lambda_b = 10.0
lambda_idt = 0.5

def generator_forward(img_a, img_b):
    true = Tensor(True, dtype.bool_)
    fake_a, fake_b, rec_a, rec_b, identity_a, identity_b = generator(img_a, img_b)
    loss_g_a = gan_loss(net_d_b(fake_b), true)
    loss_g_b = gan_loss(net_d_a(fake_a), true)
    loss_c_a = l1_loss(rec_a, img_a) * lambda_a
    loss_c_b = l1_loss(rec_b, img_b) * lambda_b
    loss_idt_a = l1_loss(identity_a, img_a) * lambda_a * lambda_idt
    loss_idt_b = l1_loss(identity_b, img_b) * lambda_b * lambda_idt
    loss_g = loss_g_a + loss_g_b + loss_c_a + loss_c_b + loss_idt_a + loss_idt_b
    return fake_a, fake_b, loss_g, loss_g_a, loss_g_b, loss_c_a, loss_c_b, loss_idt_a, loss_idt_b

def generator_forward_grad(img_a, img_b):
    _, _, loss_g, _, _, _, _, _, _ = generator_forward(img_a, img_b)
    return loss_g

def discriminator_forward(img_a, img_b, fake_a, fake_b):
    false = Tensor(False, dtype.bool_)
    true = Tensor(True, dtype.bool_)
    d_fake_a = net_d_a(fake_a)
    d_img_a = net_d_a(img_a)
    d_fake_b = net_d_b(fake_b)
    d_img_b = net_d_b(img_b)
    loss_d_a = gan_loss(d_fake_a, false) + gan_loss(d_img_a, true)
    loss_d_b = gan_loss(d_fake_b, false) + gan_loss(d_img_b, true)
    loss_d = (loss_d_a + loss_d_b) * 0.5
    return loss_d

def discriminator_forward_a(img_a, fake_a):
    false = Tensor(False, dtype.bool_)
    true = Tensor(True, dtype.bool_)
    d_fake_a = net_d_a(fake_a)
    d_img_a = net_d_a(img_a)
    loss_d_a = gan_loss(d_fake_a, false) + gan_loss(d_img_a, true)
    return loss_d_a

def discriminator_forward_b(img_b, fake_b):
    false = Tensor(False, dtype.bool_)
    true = Tensor(True, dtype.bool_)
    d_fake_b = net_d_b(fake_b)
    d_img_b = net_d_b(img_b)
    loss_d_b = gan_loss(d_fake_b, false) + gan_loss(d_img_b, true)
    return loss_d_b

# Keep an image buffer that stores up to 50 previously generated images;
# the buffer must persist across calls so the discriminators see a history of fakes
pool_size = 50
pool_imgs = []

def image_pool(images):
    if isinstance(images, Tensor):
        images = images.asnumpy()
    return_images = []
    for image in images:
        if len(pool_imgs) < pool_size:
            # Buffer not full yet: store the image and return it unchanged
            pool_imgs.append(image)
            return_images.append(image)
        else:
            if random.uniform(0, 1) > 0.5:
                # With probability 0.5, return a random image from the buffer
                # and replace it with the current one
                random_id = random.randint(0, pool_size - 1)
                tmp = pool_imgs[random_id].copy()
                pool_imgs[random_id] = image
                return_images.append(tmp)
            else:
                return_images.append(image)
    output = Tensor(return_images, ms.float32)
    if output.ndim != 4:
        raise ValueError("img should be 4d, but get shape {}".format(output.shape))
    return output

Gradient Computation and Backpropagation

from mindspore import value_and_grad

# Instantiate the gradient functions
grad_g_a = value_and_grad(generator_forward_grad, None, net_rg_a.trainable_params())
grad_g_b = value_and_grad(generator_forward_grad, None, net_rg_b.trainable_params())

grad_d_a = value_and_grad(discriminator_forward_a, None, net_d_a.trainable_params())
grad_d_b = value_and_grad(discriminator_forward_b, None, net_d_b.trainable_params())
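
As a minimal illustration of how value_and_grad behaves (a toy function unrelated to the model): it returns both the function value and the gradients, here with respect to the first positional input.

import mindspore as ms
from mindspore import value_and_grad

def f(x):
    return (x ** 2).sum()

grad_fn = value_and_grad(f)  # by default, differentiate w.r.t. the first input
value, grad = grad_fn(ms.Tensor([2.0, 3.0], ms.float32))
print(value)  # 13.0
print(grad)   # [4. 6.]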

# Compute the generator gradients and update parameters via backpropagation
def train_step_g(img_a, img_b):
    net_d_a.set_grad(False)
    net_d_b.set_grad(False)

    fake_a, fake_b, lg, lga, lgb, lca, lcb, lia, lib = generator_forward(img_a, img_b)

    _, grads_g_a = grad_g_a(img_a, img_b)
    _, grads_g_b = grad_g_b(img_a, img_b)
    optimizer_rg_a(grads_g_a)
    optimizer_rg_b(grads_g_b)

    return fake_a, fake_b, lg, lga, lgb, lca, lcb, lia, lib

# Compute the discriminator gradients and update parameters via backpropagation
def train_step_d(img_a, img_b, fake_a, fake_b):
    net_d_a.set_grad(True)
    net_d_b.set_grad(True)

    loss_d_a, grads_d_a = grad_d_a(img_a, fake_a)
    loss_d_b, grads_d_b = grad_d_b(img_b, fake_b)

    loss_d = (loss_d_a + loss_d_b) * 0.5

    optimizer_d_a(grads_d_a)
    optimizer_d_b(grads_d_b)

    return loss_d

Model Training

Training has two main parts: training the discriminators and training the generators. As mentioned for the loss function above, the paper replaces the negative log-likelihood objective with a least-squares loss.

Training the discriminators: the goal is to maximize the probability of correctly telling real images from generated ones, which with the least-squares loss means minimizing

E_{y\sim p_{data}(y)}[(D(y)-1)^2]+E_{x\sim p_{data}(x)}[D(G(x))^2]

Training the generators: minimize

E_{x\sim p_{data}(x)}[(D(G(x))-1)^2]

The training loop:

import os
import time
import random
import numpy as np
from PIL import Image
from mindspore import Tensor, save_checkpoint
from mindspore import dtype

# For time reasons, epochs is set to 1; adjust as needed
epochs = 1
save_step_num = 80
save_checkpoint_epochs = 1
save_ckpt_dir = './train_ckpt_outputs/'

print('Start training!')

for epoch in range(epochs):
    g_loss = []
    d_loss = []
    start_time_e = time.time()
    for step, data in enumerate(dataset.create_dict_iterator()):
        start_time_s = time.time()
        img_a = data["image_A"]
        img_b = data["image_B"]
        res_g = train_step_g(img_a, img_b)
        fake_a = res_g[0]
        fake_b = res_g[1]

        res_d = train_step_d(img_a, img_b, image_pool(fake_a), image_pool(fake_b))
        loss_d = float(res_d.asnumpy())
        step_time = time.time() - start_time_s

        res = []
        for item in res_g[2:]:
            res.append(float(item.asnumpy()))
        g_loss.append(res[0])
        d_loss.append(loss_d)

        if step % save_step_num == 0:
            print(f"Epoch:[{int(epoch + 1):>3d}/{int(epochs):>3d}], "
                  f"step:[{int(step):>4d}/{int(datasize):>4d}], "
                  f"time:{step_time:>3f}s,\n"
                  f"loss_g:{res[0]:.2f}, loss_d:{loss_d:.2f}, "
                  f"loss_g_a: {res[1]:.2f}, loss_g_b: {res[2]:.2f}, "
                  f"loss_c_a: {res[3]:.2f}, loss_c_b: {res[4]:.2f}, "
                  f"loss_idt_a: {res[5]:.2f}, loss_idt_b: {res[6]:.2f}")

    epoch_cost = time.time() - start_time_e
    per_step_time = epoch_cost / datasize
    mean_loss_d, mean_loss_g = sum(d_loss) / datasize, sum(g_loss) / datasize

    print(f"Epoch:[{int(epoch + 1):>3d}/{int(epochs):>3d}], "
          f"epoch time:{epoch_cost:.2f}s, per step time:{per_step_time:.2f}, "
          f"mean_g_loss:{mean_loss_g:.2f}, mean_d_loss:{mean_loss_d :.2f}")

    if epoch % save_checkpoint_epochs == 0:
        os.makedirs(save_ckpt_dir, exist_ok=True)
        save_checkpoint(net_rg_a, os.path.join(save_ckpt_dir, f"g_a_{epoch}.ckpt"))
        save_checkpoint(net_rg_b, os.path.join(save_ckpt_dir, f"g_b_{epoch}.ckpt"))
        save_checkpoint(net_d_a, os.path.join(save_ckpt_dir, f"d_a_{epoch}.ckpt"))
        save_checkpoint(net_d_b, os.path.join(save_ckpt_dir, f"d_b_{epoch}.ckpt"))

print('End of training!')

Model Inference

The first row shows the original images and the second row the corresponding generated results.

%%time
import os
import numpy as np
import matplotlib.pyplot as plt
from PIL import Image
import mindspore.dataset as ds
import mindspore.dataset.vision as vision
from mindspore import load_checkpoint, load_param_into_net

# Load the checkpoint files
def load_ckpt(net, ckpt_dir):
    param_GA = load_checkpoint(ckpt_dir)
    load_param_into_net(net, param_GA)

g_a_ckpt = './CycleGAN_apple2orange/ckpt/g_a.ckpt'
g_b_ckpt = './CycleGAN_apple2orange/ckpt/g_b.ckpt'

load_ckpt(net_rg_a, g_a_ckpt)
load_ckpt(net_rg_b, g_b_ckpt)

# Run inference on the images
fig = plt.figure(figsize=(11, 2.5), dpi=100)
def eval_data(dir_path, net, a):

    def read_img():
        for img_name in os.listdir(dir_path):
            path = os.path.join(dir_path, img_name)
            img = Image.open(path).convert('RGB')
            yield img, img_name

    dataset = ds.GeneratorDataset(read_img, column_names=["image", "image_name"])
    trans = [vision.Resize((256, 256)), vision.Normalize(mean=[0.5 * 255] * 3, std=[0.5 * 255] * 3), vision.HWC2CHW()]
    dataset = dataset.map(operations=trans, input_columns=["image"])
    dataset = dataset.batch(1)
    for i, data in enumerate(dataset.create_dict_iterator()):
        img = data["image"]
        fake = net(img)
        fake = (fake[0] * 0.5 * 255 + 0.5 * 255).astype(np.uint8).transpose((1, 2, 0))
        img = (img[0] * 0.5 * 255 + 0.5 * 255).astype(np.uint8).transpose((1, 2, 0))

        fig.add_subplot(2, 8, i+1+a)
        plt.axis("off")
        plt.imshow(img.asnumpy())

        fig.add_subplot(2, 8, i+9+a)
        plt.axis("off")
        plt.imshow(fake.asnumpy())

eval_data('./CycleGAN_apple2orange/predict/apple', net_rg_a, 0)
eval_data('./CycleGAN_apple2orange/predict/orange', net_rg_b, 4)
plt.show()

Summary

CycleGAN implements a method for learning to translate images from a source domain X to a target domain Y without paired examples. It requires only data from the two domains, with no strict correspondence between them, making it a novel unsupervised image-to-image translation network.
