Voice emotion feature fusion method and system based on main and auxiliary networks
Format | Patent |
Language | Chinese; English |
Published | 12.05.2020 |
Summary: | The invention provides a voice emotion feature fusion method and system based on main and auxiliary networks. The voice emotion feature fusion method comprises the following steps: respectively inputting a plurality of first features and second features corresponding to each piece of voice emotion data in a test set into the lower half of a main network model with parameters and into an auxiliary network model, to obtain main-network high-level features and auxiliary-network high-level features corresponding to each piece of voice emotion data; performing feature fusion on the main-network high-level features, the auxiliary parameters, and the auxiliary-network high-level features, and determining the main-and-auxiliary-network fusion features corresponding to the voice emotion data; and inputting the main-and-auxiliary-network fusion features corresponding to the voice emotion data into the upper half of the main network model with parameters to obtain fusion features. According to the method, multiple types of … |
---|---|
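As a rough illustration only, the pipeline the abstract describes (lower half of a main network plus an auxiliary network, a parameterized fusion step, then the upper half of the main network) can be sketched with plain matrix layers. The patent abstract gives no architecture details, so every dimension, the concatenation-based fusion, and the scalar auxiliary weight `alpha` below are assumptions, not the patented method itself:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

# Hypothetical dimensions (not specified in the patent abstract).
D1, D2, H = 40, 24, 16   # sizes of the first/second feature sets, hidden size
N_CLASSES = 4            # e.g. happy / sad / angry / neutral (assumed)

# Lower half of the "main" network: first features -> high-level features.
W_main_lo = rng.standard_normal((D1, H)) * 0.1
# "Auxiliary" network: second features -> high-level features.
W_aux = rng.standard_normal((D2, H)) * 0.1
# Upper half of the main network: classifies the fused representation.
W_main_hi = rng.standard_normal((2 * H, N_CLASSES)) * 0.1

alpha = 0.5  # assumed scalar "auxiliary parameter" weighting the aux branch

def fuse_and_classify(x1, x2):
    """Sketch of the main/auxiliary fusion flow described in the abstract."""
    h_main = relu(x1 @ W_main_lo)   # main-network high-level features
    h_aux = relu(x2 @ W_aux)        # auxiliary-network high-level features
    # Fusion step: concatenate the two branches, scaling the auxiliary one.
    fused = np.concatenate([h_main, alpha * h_aux], axis=-1)
    return fused @ W_main_hi        # upper half of the main network

x1 = rng.standard_normal(D1)  # e.g. spectral features of one utterance
x2 = rng.standard_normal(D2)  # e.g. prosodic features of the same utterance
print(fuse_and_classify(x1, x2).shape)  # (4,)
```

In this sketch the "auxiliary parameter" is reduced to a single scalar for readability; in a real implementation it could equally be a learned gating vector applied to `h_aux` before concatenation.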
Bibliography: | Application Number: CN201911368375 |