
Nam Hoai Nguyen et al., Journal of SCIENCE and TECHNOLOGY 127(13): 43 – 46


A NEW TRAINING PROCEDURE FOR A CLASS OF RECURRENT NEURAL NETWORKS

Nam Hoai Nguyen^1,*, Nguyet Thi Minh Trinh^2

^1 University of Technology – TNU, ^2 Yen Bai College of Technique

ABSTRACT

This work proposes a new training procedure for a class of recurrent neural networks. Based on reservoir computing networks, we extend their network structure from one delay to more than one delay and modify their training method. The novel training method is demonstrated on a benchmark problem and an experimental robot arm, and compared to traditional training methods. The results show that the proposed training procedure offers advantages such as a smaller number of weights and biases and a faster training time.

Keywords: Recurrent neural networks, reservoir computing network, echo state network, training procedure, system identification, one-link robot arm.

INTRODUCTION*

Reservoir computing networks (RCNs) have been successfully used for time series prediction. There are two major types of RCNs: liquid-state machines [1] and echo-state networks [2]. An input signal is fed into a fixed-weight dynamic network called a reservoir, and the dynamics of the reservoir map the input to the reservoir's state. A simple readout mechanism is then trained to read the state of the reservoir and map it to the desired output.
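As a rough illustration of this training philosophy, an echo-state-network readout can be fitted as a linear regression on collected reservoir states. The following is a minimal sketch only: the reservoir size, spectral-radius scaling, ridge parameter, and the one-step-delay toy task are illustrative assumptions, not values from this paper.

```python
import numpy as np

# Minimal echo-state network sketch (all sizes and constants are assumptions).
rng = np.random.default_rng(0)
N = 100                                       # reservoir size (assumption)
W_in = rng.uniform(-0.5, 0.5, (N, 1))         # fixed input weights, never trained
W = rng.uniform(-0.5, 0.5, (N, N))            # fixed reservoir weights
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # keep spectral radius below 1

def run_reservoir(u):
    """Drive the fixed reservoir with input sequence u and collect its states."""
    x = np.zeros(N)
    states = []
    for u_t in u:
        x = np.tanh(W_in[:, 0] * u_t + W @ x)  # reservoir dynamics stay fixed
        states.append(x.copy())
    return np.array(states)

# Train only the linear readout (ridge regression), as in RCN training.
u = rng.standard_normal(500)
y = np.roll(u, 1)                              # toy target: one-step-delayed input
X = run_reservoir(u)
ridge = 1e-6                                   # regularization (assumption)
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(N), X.T @ y)
y_hat = X @ W_out                              # readout prediction
```

The key point the sketch captures is that only `W_out` is ever fitted; the reservoir weights are drawn once and left untouched.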

The system-identification capability of RCNs is limited because they are only first-order models. Thus, RCNs are unable to identify systems of higher order. However, we can apply the philosophy of RCN training to train other types of recurrent neural networks. Here we focus on the structure of neural networks given in Fig. 14 of [3], shown here as Fig. 1. This type of neural network is widely used in the identification and control of dynamic nonlinear systems.

In the next section, a new training procedure is proposed: a structure of recurrent neural networks is described and a novel training method is given. The following section applies the proposed training procedure to system identification; two examples are presented. In the final section, conclusions and future work are provided.

* Tel: 0917987683; Email: [email protected]

PROPOSED TRAINING PROCEDURE

Consider the class of recurrent neural networks given in Fig. 1. This network has two layers, with one input and one output. For convenience, we strictly use the mathematical notation for equations and figures given in [4]. The input is passed through delays called TDLs; the output is also passed through a TDL and then applied to the first layer. The TDL blocks are tapped delay lines: the output of a TDL is an N-dimensional vector made up of the input signal at the current time and/or input signals in the past. IW^{k,l} is an input weight matrix and LW^{k,l} is a layer weight matrix; the superscripts k and l identify the source (l) connection and the destination (k) connection of layer weight matrices and input weight matrices. b^i, n^i, a^i, S^i and f^i are the bias vector, net input, layer output, number of neurons and transfer function of layer i (i = 1, 2), respectively. In this case S^2 = 1 and f^2 is a linear function.
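Under this notation, one forward step of the network in Fig. 1 can be sketched as below. This is a minimal sketch: the hidden-layer size, the number of taps in each TDL, and the choice of tanh for f^1 are illustrative assumptions, not specified by the paper.

```python
import numpy as np
from collections import deque

# Illustrative dimensions (assumptions, not from the paper).
S1 = 10                 # S^1 hidden neurons; S^2 = 1 output neuron
d_in, d_fb = 3, 2       # taps in the input TDL and the feedback TDL

rng = np.random.default_rng(0)
IW11 = rng.standard_normal((S1, d_in))   # IW^{1,1}: input weight matrix
LW12 = rng.standard_normal((S1, d_fb))   # LW^{1,2}: feedback weight matrix
LW21 = rng.standard_normal((1, S1))      # LW^{2,1}: layer weight matrix
b1 = np.zeros(S1)                        # b^1
b2 = np.zeros(1)                         # b^2 (zero in the RCN case)

# Tapped delay lines holding current/past inputs and past outputs.
tdl_in = deque([0.0] * d_in, maxlen=d_in)
tdl_fb = deque([0.0] * d_fb, maxlen=d_fb)

def step(p):
    """One forward step:
    a^1 = f^1(IW^{1,1} p_tdl + LW^{1,2} a^2_tdl + b^1),
    a^2 = f^2(LW^{2,1} a^1 + b^2), with f^2 linear and S^2 = 1."""
    tdl_in.appendleft(p)                             # newest input enters the TDL
    n1 = IW11 @ np.array(tdl_in) + LW12 @ np.array(tdl_fb) + b1
    a1 = np.tanh(n1)                                 # f^1 assumed to be tanh
    a2 = LW21 @ a1 + b2                              # f^2 is linear
    tdl_fb.appendleft(float(a2[0]))                  # output fed back through its TDL
    return a2
```

Calling `step(u_t)` once per sample drives the network through a sequence; the feedback TDL is what makes the model recurrent and raises its order beyond one delay.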

For traditional training, all weights and biases are updated after each epoch. For RCNs, however, only LW^{2,1} is trained and b^2 = 0. The limitation of RCNs is that the order of the network is less than 2, so they cannot be applied to identify systems of higher order. Thus, we extend the structure to the network given in Fig. 1. In addition, based on the training method of RCNs, we modify the classical training by fixing only the feedback weights LW^{1,2} during training.
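The modification amounts to a constraint on the usual gradient update: every parameter is trained as in classical training except LW^{1,2}, which keeps its random initial values. A minimal sketch follows; the gradient computation itself (e.g. backpropagation through time) is left abstract, and the sizes and learning rate are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
S1, d_in, d_fb = 10, 3, 2                     # same illustrative sizes as above

# All network parameters; LW^{1,2} is initialized randomly and then frozen.
params = {
    "IW11": rng.standard_normal((S1, d_in)),
    "LW12": rng.standard_normal((S1, d_fb)),  # feedback weights: fixed
    "LW21": rng.standard_normal((1, S1)),
    "b1": np.zeros(S1),
    "b2": np.zeros(1),
}
trainable = {"IW11", "LW21", "b1", "b2"}      # everything except LW12
lr = 0.01                                     # learning rate (assumption)

def apply_update(grads):
    """Apply one gradient step; grads would come from e.g. backpropagation
    through time (not shown). LW12 is deliberately never updated."""
    for name in trainable:
        params[name] -= lr * grads[name]
```

Freezing LW^{1,2} removes those entries from the set of adjustable parameters, which is where the smaller weight count and faster training reported in the abstract come from.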
