
Record Details

Back-propagation Neural Network Learning Algorithm Based on Privacy Preserving

Document Type: Journal Article

Chinese Title: 基于隐私保护的反向传播神经网络学习算法

English Title: Back-propagation Neural Network Learning Algorithm Based on Privacy Preserving

Author: Wang Jian (王健) [1]

First Author: Wang Jian (王健)

Affiliation: [1] School of Computer and Information Engineering, Henan University of Economics and Law, Zhengzhou 450000, China

First Affiliation: School of Computer and Information Engineering, Henan University of Economics and Law

Year: 2022

Volume: 49

Issue: S01

Pages: 575-580

Chinese Journal Title: 计算机科学

English Journal Title: Computer Science

Indexed in: CSTPCD; PKU Core Journals (2020 edition); CSCD Expanded Edition (2021-2022)

Funding: Science and Technology Research Project of the Henan Provincial Department of Science and Technology (222102210289).

Language: Chinese

Chinese Keywords: 神经网络; 隐私保护; 安全多方计算; 隐私泄露

English Keywords: Neural network; Privacy preserving; Secure multiparty computation; Privacy leakage

Abstract: Back-propagation neural network learning algorithms have been widely applied in medical diagnosis, bioinformatics, intrusion detection, homeland security and other fields. What these applications have in common is the need to extract patterns and predict trends from large amounts of complex data, and in all of them protecting sensitive data and personal private information is an important problem. However, the vast majority of existing back-propagation neural network learning algorithms do not consider how to protect data privacy during the learning process. This paper proposes a privacy-preserving algorithm for back-propagation neural networks that is suitable for horizontally partitioned data. When constructing the neural network, the network weight vector must be computed for the training sample set. To prevent leakage of the private information in the learning model, the weight vector is distributed among all participants, so that each participant holds a private share of it. The neurons of each layer are computed with a secure multiparty computation protocol, which keeps both the intermediate and the final values of the weight vector secure. Finally, the constructed learning model is shared securely by all participants, and each participant can use the model to predict the output for its own target data. Experimental results show that the proposed algorithm outperforms a traditional non-privacy-preserving algorithm in terms of execution time and accuracy error.
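The abstract states that the network weight vector is split into private shares held by all participants and that the neurons of each layer are evaluated under secure multiparty computation. The Python/NumPy sketch below is only a minimal illustration of additive secret sharing of a weight vector, assuming hypothetical helpers share_weights and reconstruct; it is not the paper's protocol, whose concrete secure multiparty computation steps are not given in this record.

# Minimal sketch (an assumption, not the paper's implementation): additively
# secret-share a weight vector among participants for horizontally partitioned
# training. share_weights and reconstruct are illustrative names.
import numpy as np

def share_weights(w, n_parties, rng):
    # Split w into n_parties additive shares whose sum equals w.
    shares = [rng.normal(size=w.shape) for _ in range(n_parties - 1)]
    shares.append(w - np.sum(shares, axis=0))  # last share makes the sum exact
    return shares

def reconstruct(shares):
    # Recombine shares; in a real protocol this happens only inside secure computation.
    return np.sum(shares, axis=0)

rng = np.random.default_rng(0)
w = rng.normal(size=4)                      # one layer's weight vector
shares = share_weights(w, n_parties=3, rng=rng)

# A jointly computed gradient step can be absorbed into a single party's share;
# the reconstructed weights then reflect the update although no single party
# ever holds the full intermediate weight vector in the clear.
grad = rng.normal(size=4)
lr = 0.1
shares[0] = shares[0] - lr * grad

assert np.allclose(reconstruct(shares), w - lr * grad)

In a complete protocol the gradient itself would also be computed jointly by secure multiparty computation over the parties' horizontally partitioned data, rather than being visible to any single participant.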

