Friday, June 30, 2017

Book: Platform Revolution: How Networked Markets Are Transforming the Economy--And How to Make Them Work for You by Geoffrey G. Parker and Marshall W. Van Alstyne


A practical guide to the new economy that is transforming the way we live, work, and play.
Uber. Airbnb. Amazon. Apple. PayPal. All of these companies disrupted their markets when they launched. Today they are industry leaders. What’s the secret to their success?
These cutting-edge businesses are built on platforms: two-sided markets that are revolutionizing the way we do business. Written by three of the most sought-after experts on platform businesses, Platform Revolution is the first authoritative, fact-based book on platform models. Whether platforms are connecting sellers and buyers, hosts and visitors, or drivers with people who need a ride, Geoffrey G. Parker, Marshall W. Van Alstyne, and Sangeet Paul Choudary reveal the what, how, and why of this revolution and provide the first “owner’s manual” for creating a successful platform business.
Platform Revolution teaches newcomers how to start and run a successful platform business, explaining ways to identify prime markets and monetize networks. Addressing current business leaders, the authors reveal strategies behind some of today’s up-and-coming platforms, such as Tinder and SkillShare, and explain how traditional companies can adapt in a changing marketplace. The authors also cover essential issues concerning security, regulation, and consumer trust, while examining markets that may be ripe for a platform revolution, including healthcare, education, and energy.
As digital networks increase in ubiquity, businesses that do a better job of harnessing the power of the platform will win. An indispensable guide, Platform Revolution charts out the brilliant future of platforms and reveals how they will irrevocably alter the lives and careers of millions.

Book: Life After Google: The Fall of Big Data and the Rise of the Blockchain Economy by George Gilder

You can say goodbye to today's Internet, New York Times bestselling author George Gilder says. Soon the current model of aggregated free content populated with "value-subtracted" advertising will die a natural death, due, of course, to the simple fact that absolutely no one wants to see online advertising. What will tomorrow's Internet look like? In Life After Google, Gilder takes readers on a brilliant, rocketing journey into the very near future, into an Internet with a new "bitcoin-bitgold" transaction layer that will replace spam with seamless micro-payments and provide an all-new standard for global money.

Book: The Samsung Way: Transformational Management Strategies from the World Leader in Innovation and Design by Jaeyong Song and Kyungmook Lee


Learn how to manage, lead, and succeed . . . the Samsung way.

Based on ten years of research and interviews with 80 top executives, the award-winning The Samsung Way is the first definitive guide to the groundbreaking management principles that transformed a lagging electronics company into one of the most successful brands in the world.
Combining professional insights from Samsung insiders with practical applications for managers, executives, and CEOs, this powerhouse of a book shows you how to:
  • Speed up decision making and execution, on a bigger scale.
  • Create a convergence synergy among diversified businesses, while staying competitive in core businesses.
  • Mix and match Western and Eastern management styles.
Also known as “The Three Paradoxes of Samsung Management,” these seemingly contradictory goals are the keys behind Chairman Lee Kun-Hee’s now-famous New Management Initiative―the business plan that drove Samsung to become the number-one leader in mobile phones, televisions, semiconductors, and other electronics. A revolutionary―and time-tested―approach to innovation, Samsung’s management principles will help you find the perfect balance of styles by combining the best of all worlds.
This ingenious step-by-step guide shows you how to implement Samsung’s proven techniques for grafting American business practices onto a Japanese system, thus keeping costs low and bringing about differentiation. You’ll learn how to achieve both economies of scale and speed in today’s hypercompetitive world. Best of all, you’ll drive new ideas and innovations at every level of your company while building on your greatest strengths and successes.
That’s The Samsung Way.
Praise for The Samsung Way
“To remain competitive in today’s global marketplace, GE must benchmark itself against the best run companies in the world. Samsung is one of these companies. This insightful book outlines Samsung’s formula for success and is an important read for any executive or leader who wishes to implement a similar plan in their own organization.”
―Jeff Immelt, Chairman and CEO of GE
“If I were to be asked about how Samsung Electronics became successful, I would confidently recommend The Samsung Way in lieu of a response. As CEO of Samsung Electronics, I am still amazed by the insightful analyses and explanations given here. This book led me to reconsider the direction of Samsung’s future strategy.”
―Oh-Hyun Kwon, Vice-Chairman and CEO of Samsung Electronics
“A firsthand glimpse of how an unlikely laggard in emerging economies became a global power to be reckoned with. This informed, readable book will help you understand what management innovation is really all about.”
―Rita McGrath, Professor, Columbia Business School, and author of The End of Competitive Advantage
“Samsung has emerged as the most intriguing, and to its rivals most threatening, global company from Asia. This book provides both detailed insights into how Samsung rose to global prominence and developed a new management model, transcending contradictions to combine the best from East and West. A fascinating read!”
―Yves Doz, Solvay Chaired Professor of Technological Innovation, INSEAD
“This is the first in-depth, behind-the-scenes look at how Samsung achieved its current success as one of the world’s foremost corporations. It will be of great interest to executives, managers, and companies who need to upgrade their game to world-class status and beyond.”
―Pankaj Ghemawat, Anselmo Rubiralta Professor, IESE Business School
“The herculean efforts of Professors Song and Lee go some distance in demystifying the secrets of The Samsung Way. There are lessons here for the behemoths of the developed world, as well as tomorrow’s challengers from the emerging world.”
―Tarun Khanna, Jorge Paulo Lemann Professor, Harvard Business School

Tuesday, June 27, 2017

Walmart - Residential Upgrade Design Tool - US Patent Application 20170177748

United States Patent Application 20170177748
Kind Code: A1
High, Donald, et al. | Published: June 22, 2017

Residential Upgrade Design Tool 


Abstract
A user inputs an address to a computer system, which retrieves an aerial image of the lot and identifies features visible in the image. A user selects a feature and a scanning drone is programmed with GPS coordinates of the boundary of the feature. The drone scans the feature and a model is generated. Upgrades for the feature are selected based on local weather and other factors. The model is updated according to a selected upgrade and rendered for a user. Training materials may be generated that include illustrations of the model modified to show intermediate stages of applying an upgrade. Materials required to apply the upgrade are determined from a surface area of the feature, including taking into account texture detected by scanning.
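
The last step of the abstract, sizing materials from the scanned surface area, reduces to simple arithmetic once the drone scan yields an area and texture estimate. Here is a minimal Python sketch of that calculation; every name and constant is illustrative, since the application does not publish formulas:

```python
def estimate_paint(surface_area_m2: float,
                   texture_factor: float = 1.0,
                   coverage_m2_per_liter: float = 10.0) -> float:
    """Estimate liters of paint needed for an upgrade to a scanned surface.

    texture_factor > 1.0 inflates the nominal area to account for the extra
    surface that scanning detects on rough textures (stucco, brick).
    All names and constants here are illustrative, not from the patent.
    """
    effective_area = surface_area_m2 * texture_factor
    return effective_area / coverage_m2_per_liter

# e.g. a 120 m^2 stucco wall (texture factor ~1.3) needs about 15.6 liters
liters = estimate_paint(120.0, texture_factor=1.3)
```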

Inventors: High, Donald (Noel, MO); Winkle, David (Bella Vista, AR); Atchley, Michael Dean (Springdale, AR)
Applicant: Wal-Mart Stores, Inc., Bentonville, AR, US
Family ID: 1000002359081
Appl. No.: 15/379,332
Filed: December 14, 2016

Monday, June 26, 2017

Google's Stereo Autofocus - US Patent Application 20170171456


United States Patent Application 20170171456
Kind Code: A1
Wei, Jianing | Published: June 15, 2017

Stereo Autofocus 

Abstract
A first image capture component may capture a first image of a scene, and a second image capture component may capture a second image of the scene. There may be a particular baseline distance between the first image capture component and the second image capture component, and at least one of the first image capture component or the second image capture component may have a focal length. A disparity may be determined between a portion of the scene as represented in the first image and the portion of the scene as represented in the second image. Possibly based on the disparity, the particular baseline distance, and the focal length, a focus distance may be determined. The first image capture component and the second image capture component may be set to focus to the focus distance.
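
The focus-distance computation the abstract describes matches standard stereo triangulation, where depth Z = f·B/d for focal length f, baseline B, and disparity d. A minimal sketch under that assumption (the application may add calibration and refinement steps the abstract does not show):

```python
def focus_distance(baseline_m: float,
                   focal_length_px: float,
                   disparity_px: float) -> float:
    """Standard stereo triangulation: Z = f * B / d.

    baseline_m: distance between the two image capture components (meters)
    focal_length_px: focal length expressed in pixels
    disparity_px: shift of the scene portion between the two images (pixels)
    This is the textbook relation; the filing may refine it further.
    """
    if disparity_px <= 0:
        return float("inf")   # zero disparity => subject at infinity
    return baseline_m * focal_length_px / disparity_px

# e.g. 1 cm baseline, 1,000 px focal length, 10 px disparity => focus at 1 m
z = focus_distance(0.01, 1000.0, 10.0)
```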

Coinbase Bitcoin Exchange - US Patent Application 20150262139

United States Patent Application 20150262139
Kind Code: A1
Shtylman, Roman | Published: September 17, 2015

BITCOIN EXCHANGE 


Abstract
A system and method for transacting bitcoin is described. Bitcoin can be sent to an email address. No miner's fee is paid by a host computer system. Hot wallet functionality is provided that transfers values of some Bitcoin addresses to a vault for purposes of security. A private key of a Bitcoin address of the vault is split and distributed to keep the vault secure. Instant exchange allows merchants and customers to lock in a local currency price. A vault has multiple email addresses to authorize a transfer of bitcoin out of the vault. Users can opt to have private keys stored in locations that are under their control. A tip button rewards content creators for their efforts. A bitcoin exchange allows users to set prices at which they are willing to buy or sell bitcoin and to execute such trades.
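
The vault's split private key is the most technically distinctive claim in the abstract. The filing does not specify the splitting scheme, so the sketch below uses the simplest possible illustration, an n-of-n XOR split; a threshold scheme such as Shamir's secret sharing would instead allow recovery from only k of n shares:

```python
import secrets
from functools import reduce

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def split_key(private_key: bytes, n_shares: int = 3) -> list:
    """Split a key into n shares, all of which are required to recombine."""
    shares = [secrets.token_bytes(len(private_key)) for _ in range(n_shares - 1)]
    last = reduce(xor_bytes, shares, private_key)  # key ^ s1 ^ ... ^ s(n-1)
    return shares + [last]

def recombine(shares: list) -> bytes:
    return reduce(xor_bytes, shares)

key = secrets.token_bytes(32)              # stand-in for a vault private key
assert recombine(split_key(key)) == key
```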

Inventors: Shtylman, Roman (New York, NY)
Applicant: Coinbase, Inc., San Francisco, CA, US
Assignee: Coinbase, Inc., San Francisco, CA
Family ID: 54069270
Appl. No.: 14/660,440
Filed: March 17, 2015

Samsung's rollable display - US Patent 9,685,100

[Figure: Samsung's rollable display. Samsung was granted a patent for a rollable display with a housing for a flexible display panel (image source: USPTO).]

United States Patent 9,685,100
Choi, et al. | Granted: June 20, 2017

Rollable display 


Abstract
A rollable display is disclosed. In one aspect, the rollable display includes a flexible display panel configured to display an image via at least a portion thereof and a housing accommodating at least a portion of the flexible display panel. The flexible display panel has a point of inflection in the portion of the flexible display panel accommodated in the housing.

Inventors: Choi, Kyungmin (Seoul, KR); Kim, Youn Joon (Seoul, KR); Lee, Sangjo (Hwaseong-si, KR); Lee, Junghun (Hwaseong-si, KR); Lee, Jusuck (Seoul, KR)
Applicant: Samsung Display Co., Ltd., Yongin, Gyeonggi-do, KR
Assignee: Samsung Display Co., Ltd. (Gyeonggi-do, KR)
Family ID: 1000002660187
Appl. No.: 14/709,078
Filed: May 11, 2015

Prior Publication Data: US 20160135284 A1, published May 12, 2016
Foreign Application Priority Data: Nov 12, 2014 [KR] 10-2014-0157422
Current U.S. Class: 1/1
Current CPC Class: G09F 9/301 (20130101); G06F 1/1652 (20130101)
Current International Class: H05K 1/00 (20060101); G09F 9/30 (20060101); G06F 1/16 (20060101)
Field of Search: 361/728, 749, 752, 753; 345/1.1, 85, 107

Baidu's voice authentication patent - US Patent Application 20160379644 - Voiceprint Authentication Method and Apparatus

United States Patent Application 20160379644
Kind Code: A1
Li, Chao, et al. | Published: December 29, 2016

Voiceprint authentication method and apparatus 


Abstract
The present disclosure provides a voiceprint authentication method and a voiceprint authentication apparatus. The method includes: displaying a first character string to a user, in which the first character string includes a predilection character preset by the user, and the predilection character is displayed as a symbol corresponding to the predilection character in the first character string; obtaining a speech of the first character string read by the user; obtaining a first voiceprint identity vector of the speech of the first character string; comparing the first voiceprint identity vector with a second voiceprint identity vector registered by the user to determine a result of a voiceprint authentication.

Inventors: Li, Chao (Beijing, CN); Wang, Zhijian (Beijing, CN)
Applicant: BAIDU ONLINE NETWORK TECHNOLOGY (BEIJING) CO., LTD., Beijing, CN
Family ID: 54576487
Appl. No.: 14/757,927
Filed: December 23, 2015

Current U.S. Class: 704/273
Current CPC Class: G06F 21/32 (20130101); G10L 17/24 (20130101); G10L 17/005 (20130101); G10L 15/10 (20130101); G06Q 20/40145 (20130101)
International Class: G10L 17/24 (20060101); G10L 17/00 (20060101); G10L 15/10 (20060101)

Foreign Application Priority Data: Jun 25, 2015 [CN] 201510358723.3

Claims



1. A voiceprint authentication method, comprising: displaying a first character string to a user, wherein the first character string comprises a predilection character preset by the user, and the predilection character is displayed as a symbol corresponding to the predilection character in the first character string; obtaining a speech of the first character string read by the user; obtaining a first voiceprint identity vector of the speech of the first character string; and comparing the first voiceprint identity vector with a second voiceprint identity vector registered by the user to determine a result of a voiceprint authentication. 

2. The method according to claim 1, wherein, before obtaining a first voiceprint identity vector of the speech of the first character string, the method further comprises: performing a speech recognition on the speech of the first character string to judge whether a speech of the symbol in the speech of the first character string corresponds to the predilection character; and obtaining the first voiceprint identity vector if the speech of the symbol in the speech of the first character string corresponds to the predilection character. 

3. The method according to claim 1, wherein comparing the first voiceprint identity vector with a second voiceprint identity vector registered by the user to determine a result of a voiceprint authentication comprises: calculating a matching value between the first voiceprint identity vector and the second voiceprint identity vector; determining that the voiceprint authentication is successful if the matching value is greater than or equal to a preset threshold; and determining that the voiceprint authentication has failed if the matching value is less than the preset threshold. 

4. The method according to claim 2, wherein comparing the first voiceprint identity vector with a second voiceprint identity vector registered by the user to determine a result of a voiceprint authentication comprises: calculating a matching value between the first voiceprint identity vector and the second voiceprint identity vector; determining that the voiceprint authentication is successful if the matching value is greater than or equal to a preset threshold; and determining that the voiceprint authentication has failed if the matching value is less than the preset threshold. 

5. The method according to claim 1, wherein, before displaying a first character string to a user, the method further comprises: establishing and storing a correspondence between the predilection character and the symbol. 

6. The method according to claim 2, wherein, before displaying a first character string to a user, the method further comprises: establishing and storing a correspondence between the predilection character and the symbol. 

7. The method according to claim 5, wherein, after establishing and storing a correspondence between the predilection character and the symbol, the method further comprises: displaying at least one second character string to the user, wherein the at least one second character string comprises the predilection character and the predilection character is displayed as the symbol in the at least one second character string; obtaining at least one speech of the at least one second character string read by the user; obtaining at least one voiceprint identity vector of the at least one speech; obtaining the second voiceprint identity vector according to the at least one voiceprint identity vector; and storing the second voiceprint identity vector. 

8. The method according to claim 7, wherein, before obtaining at least one voiceprint identity vector of the at least one speech, the method further comprises: performing a speech recognition on the at least one speech to judge whether a speech of the symbol in the at least one speech corresponds to the predilection character; and obtaining the at least one voiceprint identity vector if the speech of the symbol in the at least one speech corresponds to the predilection character. 

9. The method according to claim 1, wherein obtaining a first voiceprint identity vector of the speech of the first character string comprises: extracting an acoustic characteristic of the speech of the first character string; and calculating a posteriori probability of the acoustic characteristic under a universal background model, wherein the posteriori probability is subject to a Gaussian distribution, and an expectation of the posteriori probability is the first voiceprint identity vector. 

10. The method according to claim 7, wherein obtaining a first voiceprint identity vector of the speech of the first character string comprises: extracting an acoustic characteristic of the speech of the first character string; and calculating a posteriori probability of the acoustic characteristic under a universal background model, wherein the posteriori probability is subject to a Gaussian distribution, and an expectation of the posteriori probability is the first voiceprint identity vector. 

11. The method according to claim 1, wherein the first character string displayed to the user comprises plaintext characters, and the plaintext characters are not identical to each other. 

12. A voiceprint authentication apparatus, comprising: a processor; and a memory for storing instructions executable by the processor, wherein the processor is configured to: display a first character string to a user, wherein the first character string comprises a predilection character preset by the user, and the predilection character is displayed as a symbol corresponding to the predilection character in the first character string; obtain a speech of the first character string read by the user, and obtain a first voiceprint identity vector of the speech of the first character string; and compare the first voiceprint identity vector with a second voiceprint identity vector registered by the user to determine a result of a voiceprint authentication. 

13. The apparatus according to claim 12, wherein the processor is further configured to: perform a speech recognition on the speech of the first character string to judge whether a speech of the symbol in the speech of the first character string corresponds to the predilection character before obtaining the first voiceprint identity vector, and obtain the first voiceprint identity vector if it is determined that the speech of the symbol in the speech of the first character string corresponds to the predilection character. 

14. The apparatus according to claim 12, wherein the processor is configured to compare the first voiceprint identity vector with a second voiceprint identity vector registered by the user to determine a result of a voiceprint authentication by: calculating a matching value between the first voiceprint identity vector and the second voiceprint identity vector; determining that the voiceprint authentication is successful if the matching value is greater than or equal to a preset threshold; and determining that the voiceprint authentication has failed if the matching value is less than the preset threshold. 

15. The apparatus according to claim 12, wherein the processor is further configured to: establish a correspondence between the predilection character and the symbol before displaying the first character string to the user; and store the correspondence. 

16. The apparatus according to claim 12, wherein the processor is further configured to: display at least one second character string to the user, in which the at least one second character string comprises the predilection character and the predilection character is displayed as the symbol in the at least one second character string; obtain at least one speech of the at least one second character string read by the user, and to obtain at least one voiceprint identity vector of the at least one speech, and to obtain the second voiceprint identity vector according to the at least one voiceprint identity vector; and store the second voiceprint identity vector. 

17. The apparatus according to claim 16, wherein the processor is further configured to: perform a speech recognition on the at least one speech to judge whether a speech of the symbol in the at least one speech corresponds to the predilection character before obtaining the at least one voiceprint identity vector; and obtain the at least one voiceprint identity vector if it is determined that the speech of the symbol in the at least one speech corresponds to the predilection character. 

18. The apparatus according to claim 12, wherein the processor is configured to obtain a first voiceprint identity vector of the speech of the first character string by: extracting an acoustic characteristic of the speech of the first character string; and calculating a posteriori probability of the acoustic characteristic under a universal background model, wherein the posteriori probability is subject to a Gaussian distribution, and an expectation of the posteriori probability is the first voiceprint identity vector. 

19. The apparatus according to claim 12, wherein the first character string displayed to the user comprises plaintext characters, and the plaintext characters are not identical to each other. 

20. A program product having stored therein instructions that, when executed by one or more processors of a device, cause the device to perform a method, wherein the method comprises: displaying a first character string to a user, wherein the first character string comprises a predilection character preset by the user, and the predilection character is displayed as a symbol corresponding to the predilection character in the first character string; obtaining a speech of the first character string read by the user; obtaining a first voiceprint identity vector of the speech of the first character string; and comparing the first voiceprint identity vector with a second voiceprint identity vector registered by the user to determine a result of a voiceprint authentication.

Description



CROSS REFERENCE TO RELATED APPLICATION 

[0001] This application claims priority to and benefits of Chinese Patent Application No. 201510358723.3, filed with the State Intellectual Property Office on Jun. 25, 2015, the entire content of which is incorporated herein by reference. 

FIELD 

[0002] The present disclosure generally relates to the field of authentication technology, and more particularly, to a voiceprint authentication method and a voiceprint authentication apparatus. 

BACKGROUND 

[0003] Mobile payment generally refers to a business transaction for goods or services carried out by two parties via mobile terminals over a mobile communication network. The mobile terminal used in mobile payment may be a mobile phone, a personal digital assistant (hereinafter referred to as PDA), a mobile personal computer (hereinafter referred to as PC), etc. 

[0004] In the related art there are several ways to realize mobile payment, such as message payment, code-scanning payment, and fingerprint payment. However, the security of current mobile payment modes (such as those based on passwords and message authentication) is poor: once another person obtains the password or the mobile phone, the payment can be completed, resulting in economic losses for the user and a poor user experience. 

SUMMARY 

[0005] The present disclosure aims to solve at least one of the problems existing in the related art to at least some extent. 

[0006] Accordingly, a first objective of the present disclosure is to provide a voiceprint authentication method. With this method, the user's identity is authenticated by comparing the user's voiceprint with the voiceprint generated during registration, so the safety of payment is improved, and it is unnecessary to input and then verify a password, thus improving the convenience and efficiency of payment. Moreover, characters may be concealed according to the user's predilection, thus satisfying the user's preference that the password not be displayed in plaintext and further improving the user experience and the usability of the voiceprint password. 

[0007] A second objective of the present disclosure is to provide a voiceprint authentication apparatus. 

[0008] In order to achieve the above objectives, embodiments of a first aspect of the present disclosure provide a voiceprint authentication method. The method includes: displaying a first character string to a user, in which the first character string includes a predilection character preset by the user, and the predilection character is displayed as a symbol corresponding to the predilection character in the first character string; obtaining a speech of the first character string read by the user; obtaining a first voiceprint identity vector of the speech of the first character string; and comparing the first voiceprint identity vector with a second voiceprint identity vector registered by the user to determine a result of a voiceprint authentication. 

[0009] With the voiceprint authentication method according to embodiments of the present disclosure, the character string displayed to the user includes the predilection character preset by the user, and the predilection character is displayed as the symbol in the character string; the speech of the character string read by the user and the voiceprint identity vector of that speech are then obtained, and this voiceprint identity vector is compared with the voiceprint identity vector registered by the user to determine the result of the voiceprint authentication. By authenticating the user's identity through a comparison of the user's voiceprint with the voiceprint generated during registration, the safety of payment may be improved, and it is unnecessary to input and then verify a password, thus improving the convenience and efficiency of payment. Moreover, characters may be concealed according to the user's predilection, thus satisfying the user's preference that the password not be displayed in plaintext and further improving the user experience and the usability of the voiceprint password. 

[0010] In order to achieve the above objectives, embodiments of a second aspect of the present disclosure provide a voiceprint authentication apparatus. The apparatus includes: a displaying module, configured to display a first character string to a user, in which the first character string includes a predilection character preset by the user, and the predilection character is displayed as a symbol corresponding to the predilection character in the first character string; an obtaining module, configured to obtain a speech of the first character string read by the user, and to obtain a first voiceprint identity vector of the speech of the first character string; and a determining module, configured to compare the first voiceprint identity vector with a second voiceprint identity vector registered by the user to determine a result of a voiceprint authentication. 

[0011] In the voiceprint authentication apparatus according to embodiments of the present disclosure, the character string displayed by the displaying module to the user includes the predilection character preset by the user, in which the predilection character is displayed as the symbol in the character string; the obtaining module then obtains the speech of the character string read by the user and the voiceprint identity vector of that speech, and the determining module compares this voiceprint identity vector with the voiceprint identity vector registered by the user to determine the result of the voiceprint authentication. As with the method above, this improves the safety, convenience, and efficiency of payment, conceals characters according to the user's predilection, and improves the user experience and the usability of the voiceprint password. 

[0012] In order to achieve the above objectives, embodiments of a third aspect of the present disclosure provide a program product having stored therein instructions that, when executed by one or more processors of a device, cause the device to perform the method according to the first aspect of the present disclosure. 

[0013] Additional aspects and advantages of embodiments of present disclosure will be given in part in the following descriptions, become apparent in part from the following descriptions, or be learned from the practice of the embodiments of the present disclosure. 

BRIEF DESCRIPTION OF THE DRAWINGS 

[0014] These and other aspects and advantages of embodiments of the present disclosure will become apparent and more readily appreciated from the following descriptions made with reference to the accompanying drawings, in which: 

[0015] FIG. 1 is a flow chart showing a voiceprint authentication method according to an embodiment of the present disclosure; 

[0016] FIG. 2 is a flow chart showing a voiceprint authentication method according to another embodiment of the present disclosure; 

[0017] FIG. 3 is a flow chart showing a registration procedure in a voiceprint authentication method according to an embodiment of the present disclosure; 

[0018] FIG. 4 is a flow chart showing a registration procedure in a voiceprint authentication method according to another embodiment of the present disclosure; 

[0019] FIG. 5 is a block diagram illustrating a voiceprint authentication apparatus according to an embodiment of the present disclosure; and 

[0020] FIG. 6 is a schematic diagram illustrating a voiceprint authentication apparatus according to another embodiment of the present disclosure. 

DETAILED DESCRIPTION 

[0021] Reference will be made in detail to embodiments of the present disclosure. Embodiments of the present disclosure will be shown in drawings, in which the same or similar elements and the elements having same or similar functions are denoted by like reference numerals throughout the descriptions. The embodiments described herein according to drawings are explanatory and illustrative, not construed to limit the present disclosure. In contrast, the present disclosure may include alternatives, modifications and equivalents within the spirit and scope of the appended claims. 

[0022] FIG. 1 is a flow chart showing a voiceprint authentication method according to an embodiment of the present disclosure. As shown in FIG. 1, this voiceprint authentication method may include the following steps. 

[0023] In step 101, a first character string is displayed to a user, in which the first character string includes a predilection character preset by the user and the predilection character is displayed as a symbol corresponding to the predilection character in the first character string. 

[0024] The symbol corresponding to the predilection character is preset by the user. For example, the user may select "1" and "6" as predilection characters from the ten digits 0-9, and set "1" to correspond to the symbol "#" and "6" to the symbol "@", so the character string "8291765" is displayed as "829#7@5". 

[0025] In this embodiment, the symbol corresponding to the predilection character may be displayed in different modes, which include but are not limited to the following forms. 

[0026] 1. Special characters, for example, the special characters on the keyboard, such as "!", "@", "#", "$", "%", "^", "&", "*", "(", or ")", etc. 

[0027] 2. Chinese characters. 

[0028] 3. Pictures, for example, an icon of a fruit, a small animal, or a cartoon character, etc. 
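
A minimal sketch of the masking step in [0024], assuming the server keeps a per-user map from predilection characters to display symbols (the function and parameter names are illustrative, not from the filing):

```python
def mask_string(challenge: str, predilection_map: dict) -> str:
    """Display a challenge string with the user's predilection characters
    replaced by their preset symbols, e.g. {"1": "#", "6": "@"} turns
    "8291765" into "829#7@5"."""
    return "".join(predilection_map.get(ch, ch) for ch in challenge)

assert mask_string("8291765", {"1": "#", "6": "@"}) == "829#7@5"
```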

[0029] In step 102, a speech of the first character string read by the user is obtained. 

[0030] In this example, the speech of "829#7@5" read by the user is obtained. 

[0031] In step 103, a first voiceprint identity vector of the speech of the first character string is obtained. 

[0032] Specifically, the first voiceprint identity vector of the speech of the first character string is obtained by performing steps of: extracting an acoustic characteristic of the speech of the first character string; calculating a posteriori probability of the acoustic characteristic under a universal background model, in which the posteriori probability is subject to a Gaussian distribution and an expectation of the posteriori probability is the first voiceprint identity vector. 

[0033] In step 104, the first voiceprint identity vector is compared with a second voiceprint identity vector registered by the user to determine a result of a voiceprint authentication. 

[0034] In the voiceprint authentication method described above, the first character string displayed to the user includes the predilection character preset by the user, with the predilection character displayed as its corresponding symbol in the first character string; the speech of the first character string read by the user and the first voiceprint identity vector of that speech are then obtained, and the first voiceprint identity vector is compared with the second voiceprint identity vector registered by the user to determine the result of the voiceprint authentication. By authenticating the user's identity through a comparison of the user's voiceprint with the voiceprint generated during registration, the safety of payment may be improved, and it is unnecessary to input and then verify a password, thus improving the convenience and efficiency of payment. Moreover, characters may be concealed according to the user's predilection, thus satisfying the user's preference that the password not be displayed in plaintext and further improving the user experience and the usability of the voiceprint password. 

[0035] FIG. 2 is a flow chart showing a voiceprint authentication method according to another embodiment of the present disclosure. Further, referring to FIG. 2, the voiceprint authentication method further includes the following steps before step 103 (i.e. the first voiceprint identity vector of the speech of the first character string is obtained). 

[0036] In step 201, a speech recognition is performed on the speech of the first character string to judge whether a speech of the symbol in the speech of the first character string corresponds to the predilection character. If the speech of the symbol in the speech of the first character string corresponds to the predilection character, step 103 is executed; otherwise, step 202 is executed. 

[0037] In step 202, an error is returned and the error indicates that the speech of the symbol in the speech of the first character string does not correspond to the predilection character. 

[0038] In other words, during authentication the user reads one character string, and a server may perform speech recognition on the speech of this character string and judge whether the speech of the symbol in it corresponds to the predilection character preset by the user. Only when the user reads the predilection character correctly is the speech of the character string processed further to obtain a voiceprint identity vector. 

[0039] To prevent fraud using a sound recording, a completely random character string may be adopted during authentication. So that the string read during authentication remains close to the voiceprint identity vector registered by the user, the string may include characters displayed in plaintext. In this embodiment, each plaintext character may appear only once, i.e., no two plaintext characters are identical. The predilection character concealed by a symbol may either differ from every plaintext character (case one) or be identical to one plaintext character (case two). For example, when the predilection character is "1" corresponding to the symbol "#", a character string may be displayed as "2#763985" (case one) or "2#763915" (case two). 
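
A sketch of generating such a challenge string under the constraints just described: fully random, each plaintext digit unique, and the predilection character present but masked. This realizes case one; the helper name and parameters are hypothetical, not from the filing:

```python
import random

def make_challenge(predilection: str, symbol: str, length: int = 8) -> str:
    """Build a random digit challenge with unique plaintext digits, in which
    the predilection character appears once, concealed by its symbol."""
    digits = [d for d in "0123456789" if d != predilection]
    plaintext = random.sample(digits, length - 1)       # unique plaintext digits
    plaintext.insert(random.randrange(length), symbol)  # conceal predilection
    return "".join(plaintext)

# e.g. predilection "1" shown as "#": something like "2#763985"
print(make_challenge("1", "#"))
```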

[0040] Specifically, the authentication procedure includes three phases: signal processing, voiceprint comparison, and consistency judgment. Signal processing includes signal pre-emphasis, voice activity detection (hereinafter referred to as VAD), acoustic characteristic extraction, and characteristic processing on the speech of the character string read by the user. 

[0041] The voiceprint comparison and the consistency judgment refer to step 104, in which the first voiceprint identity vector is compared with the second voiceprint identity vector registered by the user to determine the result of the voiceprint authentication. Specifically, referring to FIG. 2, step 104 may include the following steps. 

[0042] In step 203, a matching value between the first voiceprint identity vector and the second voiceprint identity vector registered by the user is calculated. 

[0043] Specifically, the matching value between the first voiceprint identity vector and the second voiceprint identity vector registered by the user may be calculated by comparing the first voiceprint identity vector (identity vector, hereinafter referred to as ivector) generated during authentication with the second voiceprint ivector generated during registration, and scoring according to the comparison result. In specific implementations, the cosine distance, a Support Vector Machine (hereinafter referred to as SVM), a Bayesian classifier, or Gaussian Probabilistic Linear Discriminant Analysis (hereinafter referred to as GPLDA) may be adopted. In the following, the comparing and scoring procedure is described taking the GPLDA method as an example. 
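
Of the listed scorers, the cosine distance is the simplest and makes a useful baseline next to the GPLDA derivation that follows; a one-function sketch:

```python
import numpy as np

def cosine_score(eta1, eta2):
    """Cosine similarity between an authentication ivector and the
    enrolled ivector; higher means more likely the same speaker."""
    return float(eta1 @ eta2 / (np.linalg.norm(eta1) * np.linalg.norm(eta2)))
```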

[0044] It is assumed that the first voiceprint ivector obtained during authentication is denoted $\eta_1$, and the second voiceprint ivector stored on the server during registration is denoted $\eta_2$. There are two hypotheses: $H_1$, that the person who reads the first character string is the registered user, and $H_0$, that the person who reads the first character string is not the registered user. The log-likelihood ratio of this hypothesis test is given by formula (1):

$$\mathrm{score} = \log \frac{P(\eta_1, \eta_2 \mid H_1)}{P(\eta_1 \mid H_0)\, P(\eta_2 \mid H_0)} \qquad (1)$$

Assuming that the conditional probability distributions in both the numerator and the denominator are Gaussian with zero expectation, formula (1) may be simplified to formula (2):

$$\mathrm{score} = \log N\!\left( \begin{bmatrix} \eta_1 \\ \eta_2 \end{bmatrix}; \begin{bmatrix} 0 \\ 0 \end{bmatrix}, \begin{bmatrix} \Sigma_{tot} & \Sigma_{ac} \\ \Sigma_{ac} & \Sigma_{tot} \end{bmatrix} \right) - \log N\!\left( \begin{bmatrix} \eta_1 \\ \eta_2 \end{bmatrix}; \begin{bmatrix} 0 \\ 0 \end{bmatrix}, \begin{bmatrix} \Sigma_{tot} & 0 \\ 0 & \Sigma_{tot} \end{bmatrix} \right) = \eta_1^t Q \eta_1 + \eta_2^t Q \eta_2 + 2 \eta_1^t P \eta_2 + \mathrm{const} \qquad (2)$$

in which

$$Q = \Sigma_{tot}^{-1} - \left( \Sigma_{tot} - \Sigma_{ac} \Sigma_{tot}^{-1} \Sigma_{ac} \right)^{-1}$$

$$P = \Sigma_{tot}^{-1} \Sigma_{ac} \left( \Sigma_{tot} - \Sigma_{ac} \Sigma_{tot}^{-1} \Sigma_{ac} \right)^{-1}$$

$$\Sigma_{tot} = \Phi \Phi^t + \Sigma$$

$$\Sigma_{ac} = \Phi \Phi^t$$

where $\Phi$ and $\Sigma$ are obtained from the training stage of the GPLDA model and may be extracted directly. The GPLDA model is denoted as

$$\eta_r = m + \Phi \beta + \epsilon_r \qquad (3)$$

where $\eta_r$ represents the voiceprint ivector of the $r$-th person, $\beta$ represents the true value of the voiceprint of the $r$-th person, which is a hidden variable and cannot be obtained directly, $\Phi$ represents a transfer matrix, and $\epsilon_r$ represents an observational error subject to the Gaussian distribution $N(0, \Sigma)$. 
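
A compact numpy sketch of the GPLDA comparison in formula (2), assuming $\Phi$ and $\Sigma$ come from an already-trained model; the constant term is dropped since it does not affect threshold comparisons:

```python
import numpy as np

def gplda_score(eta1, eta2, Phi, Sigma):
    """Log-likelihood-ratio score between two ivectors under a Gaussian
    PLDA model, per formula (2), omitting the additive constant."""
    Sigma_ac = Phi @ Phi.T            # across-class covariance, Phi Phi^t
    Sigma_tot = Sigma_ac + Sigma      # total covariance, Phi Phi^t + Sigma

    Sigma_tot_inv = np.linalg.inv(Sigma_tot)
    # shared inner term: (Sigma_tot - Sigma_ac Sigma_tot^{-1} Sigma_ac)^{-1}
    M = np.linalg.inv(Sigma_tot - Sigma_ac @ Sigma_tot_inv @ Sigma_ac)

    Q = Sigma_tot_inv - M
    P = Sigma_tot_inv @ Sigma_ac @ M

    # eta1^t Q eta1 + eta2^t Q eta2 + 2 eta1^t P eta2
    return eta1 @ Q @ eta1 + eta2 @ Q @ eta2 + 2 * eta1 @ P @ eta2
```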

[0045] Moreover, in this embodiment, multi-classifier fusion is supported. In other words, during authentication, multiple classification algorithms may be adopted for one kind of acoustic characteristic; for example, SVM, GPLDA, and the cosine distance may be used simultaneously, and a score fusion may then be performed on the scores of these three classifiers to obtain a final score. 

[0046] Moreover, in this embodiment, the fusion of multiple characteristics may be supported. In other words, multiple kinds of acoustic characteristics may be extracted, scores may be obtained using the same classifier or different classifiers, and these scores may then be fused. For example, the Mel Frequency Cepstral Coefficient (hereafter referred to as MFCC) and Perceptual Linear Predictive (hereafter referred to as PLP) characteristics of a speech may be extracted simultaneously; the voiceprint ivector based on MFCC and the voiceprint ivector based on PLP may then be obtained, a GPLDA classifier may be used to obtain two scores, and these two scores are fused to obtain a final score. 
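
The fusion rule itself is not specified in the filing; a plain weighted mean over per-classifier (or per-characteristic) scores is one common choice, sketched here:

```python
import numpy as np

def fuse_scores(scores, weights=None):
    """Fuse scores from several classifiers (e.g. SVM, GPLDA, cosine
    distance) into one final score via a weighted average."""
    scores = np.asarray(scores, dtype=float)
    if weights is None:
        weights = np.full(len(scores), 1.0 / len(scores))
    return float(weights @ scores)

final = fuse_scores([0.62, 0.71, 0.55])   # equal-weight fusion of 3 scores
```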

[0047] In step 204, it is determined whether the matching value is greater than or equal to a preset threshold. If yes, step 205 is executed; otherwise, step 206 is executed. 

[0048] The preset threshold may be set according to system performance and/or implementation requirements in specific implementations. The value of the preset threshold is not limited herein. 

[0049] In step 205, the voiceprint authentication of the user is successful. 

[0050] In step 206, the voiceprint authentication of the user has failed. 

[0051] The authentication procedure is described above. It should be understood that the authentication procedure described above can be used in scenarios, such as payment and/or identity authentication, in which it is necessary to authenticate the user's identity. 

[0052] In embodiments of the present disclosure, a registration procedure may be performed before the authentication to obtain the second voiceprint identity vector registered by the user. FIG. 3 is a flow chart showing a registration procedure in a voiceprint authentication method according to an embodiment of the present disclosure. As shown in FIG. 3, the registration procedure may include the following steps. 

[0053] In step 301, a correspondence between the predilection character and the symbol is established and stored. 

[0054] For example, the user may select any numbers from the digits 0-9 as predilection characters according to his/her own predilection; for example, "1" and "6" may be selected as the predilection characters, with "1" set to correspond to the symbol "#" and "6" to the symbol "@". The server then needs to establish and store the correspondence between "1" and "#" and between "6" and "@". 

[0055] In this embodiment, the symbol corresponding to the predilection character may be displayed in different modes, which include but are not limited to the following forms. 

[0056] 1. Special characters, for example, the special characters on the keyboard, such as "!", "@", "#", "$", "%", "^", "&", "*", "(", or ")", etc. 

[0057] 2. Chinese characters. 

[0058] 3. Pictures, for example, an icon of a fruit, a small animal, or a cartoon character, etc. 

[0059] In step 302, at least one second character string is displayed to the user, in which a second character string includes the predilection character and the predilection character is displayed as the symbol corresponding to the predilection character in the second character string. 

[0060] The second character string displayed to the user may include characters displayed in plaintext, and no two of the plaintext characters are identical. 

[0061] In order to improve safety and prevent fraud using a sound recording, each second character string displayed to the user is a completely random character string that follows no rule. In order to cover a bigger sample space, each digit in the second character string may appear only once, that is, no two plaintext characters in the string are identical. For example, the second character string may be "32149658", but cannot be "32149628", in which "2" appears twice. At the same time, the second character string contains the predilection character preset by the user. 

[0062] In step 303, at least one speech of the at least one second character string read by the user is obtained. 

[0063] During registration, the user reads the at least one second character string, in which the predilection numbers may be displayed as specific symbols. For example, if a character string is "32#49@58", the user needs to read it as "32149658". 

[0064] In step 304, at least one voiceprint identity vector of the at least one speech is obtained. 

[0065] Specifically, the at least one voiceprint identity vector of the at least one speech is obtained by: extracting an acoustic characteristic of the at least one speech; calculating a posteriori probability of the acoustic characteristic under a universal background model, in which the posteriori probability is subject to a Gaussian distribution and an expectation of the posteriori probability is the at least one voiceprint identity vector of the at least one speech. 

[0066] In this embodiment, a voiceprint identity vector may be obtained by adopting the current advanced identity vector (hereafter referred to as ivector) modeling method, which includes a signal processing phase and a modeling phase. The signal processing phase includes signal pre-emphasis, voice activity detection (VAD), acoustic characteristic extraction, and characteristic processing. During the modeling phase, Baum-Welch statistics are computed on the acoustic characteristic (for example, MFCC) of each speech under the universal background model (hereafter referred to as UBM), and the posteriori probability of the acoustic characteristic may be calculated, which is subject to a Gaussian distribution; the expectation of the posteriori probability is the ivector. For example, a speech $u$ is segmented into an acoustic characteristic set of $L$ frames $\{y_1, y_2, \ldots, y_L\}$, where the characteristic dimension is $D$. 

[0067] The 0-order and 1-order Baum-Welch statistics may be calculated on a UBM including $C$ Gaussian components, denoted as follows:

$$N_c = \sum_{t=1}^{L} P(c \mid y_t, \Omega), \qquad F_c = \sum_{t=1}^{L} P(c \mid y_t, \Omega)\,(y_t - m_c)$$

where $c = 1, 2, \ldots, C$ represents the index of a Gaussian component, $P(c \mid y_t, \Omega)$ represents the posteriori probability of the acoustic characteristic $y_t$ on the $c$-th Gaussian component, and $m_c$ represents the expectation of the $c$-th Gaussian component. The voiceprint ivector of the speech $u$ may be obtained by the following formula:

$$\eta = \left( I + T^t \Sigma^{-1} N T \right)^{-1} T^t \Sigma^{-1} F$$

where $N$ is a square matrix of dimension $CD \times CD$ whose diagonal blocks are $N_c I$ ($c = 1, \ldots, C$), $I$ is a unit diagonal matrix, $F$ is a vector of dimension $CD \times 1$ obtained by concatenating all 1-order statistics $F_c$, $T$ and $\Sigma$ are the transfer matrix and covariance matrix of the ivector extractor respectively, which are obtained during a training phase by factor analysis and may be extracted directly, and the operator $(\cdot)^t$ represents matrix transposition. 
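
A numpy sketch of the statistics and the ivector formula above. The Baum-Welch accumulation and the closed-form solve follow the text directly; the posterior computation is a stand-in (a real system would use the UBM's weights and covariances to compute $P(c \mid y_t, \Omega)$):

```python
import numpy as np

def extract_ivector(Y, ubm_weights, ubm_means, T, Sigma_inv):
    """Extract an ivector from L frames of D-dim features Y (L x D), given
    UBM means (C x D), a total-variability matrix T (CD x R), and the
    inverse covariance Sigma_inv (CD x CD) of the extractor."""
    L, D = Y.shape
    C = ubm_means.shape[0]

    # Placeholder posteriors P(c | y_t): softmax over negative squared
    # distances to component means (illustrative, not the patent's spec).
    d2 = ((Y[:, None, :] - ubm_means[None, :, :]) ** 2).sum(axis=2)   # L x C
    logp = np.log(ubm_weights)[None, :] - 0.5 * d2
    post = np.exp(logp - logp.max(axis=1, keepdims=True))
    post /= post.sum(axis=1, keepdims=True)

    # 0-order and 1-order Baum-Welch statistics, centered on the UBM means
    N_c = post.sum(axis=0)                            # shape (C,)
    F_c = post.T @ Y - N_c[:, None] * ubm_means       # shape (C, D)

    # Assemble block-diagonal N (CD x CD) and stacked F (CD,)
    N = np.kron(np.diag(N_c), np.eye(D))
    F = F_c.reshape(-1)

    # eta = (I + T^t Sigma^{-1} N T)^{-1} T^t Sigma^{-1} F
    R = T.shape[1]
    lhs = np.eye(R) + T.T @ Sigma_inv @ N @ T
    rhs = T.T @ Sigma_inv @ F
    return np.linalg.solve(lhs, rhs)
```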

[0068] In step 305, the second voiceprint identity vector registered by the user is obtained according to the at least one voiceprint identity vector of the at least one speech. 

[0069] It is assumed that K character strings are adopted during the registration and a separate voiceprint ivector may be extracted from each character string. After the user reads all the K character strings, K voiceprint ivectors may be combined to obtain a unique voiceprint ivector of the user for representing the voiceprint characteristic of the user. This calculation may be shown as follows: 

$$\tilde{\eta} = \operatorname{norm}\!\left( \frac{1}{K} \sum_{k=1}^{K} \operatorname{norm}(\eta_k) \right)$$

where the operator $\operatorname{norm}(\cdot)$ represents length normalization, which converts the vector in the brackets to unit length. Another expression,

$$\tilde{\eta}' = \frac{1}{K} \sum_{k=1}^{K} \operatorname{norm}(\eta_k),$$

may be adopted in this embodiment. 
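
In code, this enrollment combination is a few lines; the sketch below implements the first expression, and dropping the final normalization yields the second:

```python
import numpy as np

def combine_ivectors(ivectors, renormalize=True):
    """Average the length-normalized per-utterance ivectors; optionally
    length-normalize the mean as well (the first formula above)."""
    norm = lambda v: v / np.linalg.norm(v)
    mean = np.mean([norm(v) for v in ivectors], axis=0)
    return norm(mean) if renormalize else mean
```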

[0070] It should be understood that, the method for obtaining the voiceprint identity vector described above may be applied to obtain the voiceprint identity vector during the authentication. 

[0071] In step 306, the second voiceprint identity vector registered by the user is stored. 

[0072] Further, FIG. 4 is a flow chart showing a registration procedure in a voiceprint authentication method according to another embodiment of the present disclosure. As shown in FIG. 4, the method further includes the following steps before step 304. 

[0073] In step 401, a speech recognition is performed on the at least one speech. 

[0074] In step 402, it is determined whether a speech of the symbol in the at least one speech corresponds to the predilection character. If yes, step 304 is executed; otherwise, step 403 is executed. 

[0075] In step 403, an error is returned and the error indicates that the speech of the symbol in the at least one speech does not correspond to the predilection character. 

[0076] In other words, for each character string read by the user, a text matching is performed on the server. Only when the user reads the predilection character correctly is a model established; otherwise, the user needs to re-read the character string. 

[0077] The voiceprint authentication method provided by embodiments of the present disclosure combines voiceprint authentication with a password to improve the user experience of a voiceprint payment system. The safety of voiceprint authentication may be improved by concealing some characters in the random character string; at the same time, the user's preference that the password not be displayed in plaintext is satisfied. Unlike a traditional lengthy password, the concealed character sequence is short and easy to remember by association with special symbols. 

[0078] The voiceprint authentication method provided by embodiments of the present disclosure improves the safety of payment. Since the user's voiceprint information is used, which is difficult to imitate, the convenience and safety of the method are improved, and it is unnecessary for the user to input a password and then verify a message, thus improving the convenience and efficiency of payment. Compared with simple voiceprint payment, the method provided by embodiments of the present disclosure combines the voiceprint and the user's predilection, which can achieve the cumulative effect of traditional voiceprint authentication and traditional password authentication. It satisfies the user's preference that the password not be displayed in plaintext by concealing characters according to the user's predilection, thus improving the user experience. In addition, the method improves the usability of the voiceprint password: unlike a dull traditional voiceprint password, the voiceprint password in the embodiments of the present disclosure incorporates special characters, pictures, or Chinese characters, making it friendlier and more usable. 

[0079] FIG. 5 is a block diagram illustrating a voiceprint authentication apparatus according to an embodiment of the present disclosure. The voiceprint authentication apparatus in this embodiment may realize the procedure of the embodiment shown in FIG. 1. As shown in FIG. 5, the voiceprint authentication apparatus may include a displaying module 51, an obtaining module 52 and a determining module 53. 

[0080] The displaying module 51 is configured to display a first character string to a user, in which the first character string includes a predilection character preset by the user, and the predilection character is displayed as a symbol corresponding to the predilection character in the first character string. 

[0081] The symbol corresponding to the predilection character is preset by the user. For example, the user may select "1" and "6" as predilection characters from the ten digits 0-9, and set "1" to correspond to the symbol "#" and "6" to the symbol "@", so the character string "8291765" is displayed as "829#7@5". 

[0082] In this embodiment, the symbol corresponding to the predilection character may be displayed in different modes, which include but are not limited to the following forms. 

[0083] 1. Special characters, for example, the special characters on the keyboard, such as "!", "@", "#", "$", "%", "^", "&", "*", "(", or ")", etc. 

[0084] 2. Chinese characters. 

[0085] 3. Pictures, for example, an icon of a fruit, a small animal, or a cartoon character, etc. 

[0086] The obtaining module 52 is configured to obtain a speech of the first character string read by the user, and to obtain a first voiceprint identity vector of the speech of the first character string. In this example, the speech of "829#7@5" read by the user is obtained. 

[0087] The determining module 53 is configured to compare the first voiceprint identity vector with a second voiceprint identity vector registered by the user to determine a result of a voiceprint authentication. 

[0088] In the voiceprint authentication apparatus described above, the first character string displayed to the user by the displaying module 51 includes the predilection character preset by the user, with the predilection character displayed as its corresponding symbol in the first character string; the obtaining module 52 then obtains the speech of the first character string read by the user and the first voiceprint identity vector of that speech, and the determining module 53 compares the first voiceprint identity vector with the second voiceprint identity vector registered by the user to determine the result of the voiceprint authentication. By authenticating the user's identity through a comparison of the user's voiceprint with the voiceprint generated during registration, the safety of payment may be improved, and it is unnecessary to input and then verify a password, thus improving the convenience and efficiency of payment. Moreover, characters may be concealed according to the user's predilection, thus satisfying the user's preference that the password not be displayed in plaintext and further improving the user experience and the usability of the voiceprint password. 

[0089] FIG. 6 is a block diagram illustrating a voiceprint authentication apparatus according to another embodiment of the present disclosure. Compared with the voiceprint authentication apparatus shown in FIG. 5, the voiceprint authentication apparatus shown in FIG. 6 may further include: a speech recognition module 54. 

[0090] The speech recognition module 54 is configured to perform a speech recognition on the speech of the first character string to judge whether a speech of the symbol in the speech of the first character string corresponds to the predilection character before the obtaining module 52 obtains the first voiceprint identity vector. 

[0091] The obtaining module 52 is specifically configured to obtain the first voiceprint identity vector if the speech recognition module 54 determines that the speech of the symbol in the speech of the first character string corresponds to the predilection character. 

[0092] In other words, during the authentication, the user needs to read one character string aloud, and the speech recognition module 54 may perform speech recognition on the speech of this character string and judge whether the speech of the symbol in it corresponds to the predilection character. Only when the user reads the predilection character correctly does the speech recognition module 54 allow the authentication to proceed, after which the obtaining module 52 obtains a voiceprint identity vector of the speech of the character string. 
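
As an illustration, the text check performed by the speech recognition module 54 might look like the following Python sketch: the recognizer's transcript is compared with the displayed string after each symbol is mapped back to its predilection character. The helper names, and the assumption that the recognizer returns the spoken digits as a plain string, are hypothetical.

    # Map each concealed symbol back to its predilection character.
    def expected_plain(displayed: str, correspondence: dict) -> str:
        reverse = {sym: ch for ch, sym in correspondence.items()}
        return "".join(reverse.get(ch, ch) for ch in displayed)

    # True only if the user read the predilection character correctly.
    def text_matches(transcript: str, displayed: str,
                     correspondence: dict) -> bool:
        return transcript == expected_plain(displayed, correspondence)

Only when text_matches(...) returns True would the obtaining module 52 proceed to extract the voiceprint identity vector.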

[0093] To prevent fraud by sound recording, a completely random character string may be adopted during the authentication. In order to enable a character string read during the authentication to remain close to the voiceprint identity vector registered by the user, the character string may include characters displayed in plaintext. In this embodiment, however, each plaintext character may appear only once, i.e. no two plaintext characters are identical. The predilection character concealed by a symbol may differ from every plaintext character (case one) or coincide with one of them (case two). For example, when the predilection character is "1" corresponding to the symbol "#", the character string may be displayed as "2#763985" (case one) or "2#763915" (case two). 
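
A minimal sketch of generating such a challenge string is given below, assuming the digits 0 to 9 and a single concealed predilection character per string; the flag controlling whether case two is permitted is an illustrative addition, not part of the disclosure.

    import random

    def make_challenge(correspondence: dict, length: int = 8,
                       allow_case_two: bool = False) -> str:
        predilection = random.choice(list(correspondence))
        pool = [d for d in "0123456789"
                if allow_case_two or d != predilection]
        digits = random.sample(pool, length - 1)  # unique plaintext digits
        digits.insert(random.randrange(length),
                      correspondence[predilection])
        return "".join(digits)

    print(make_challenge({"1": "#"}))  # e.g. "2#763985"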

[0094] In this embodiment, the determining module 53 may include a calculating sub-module 531 and an authentication result determining sub-module 532. 

[0095] The calculating sub-module 531 is configured to calculate a matching value between the first voiceprint identity vector and the second voiceprint identity vector registered by the user. Specifically, when calculating the matching value, the calculating sub-module 531 may adopt the method provided in the embodiment shown in FIG. 2, which will not be elaborated herein. The authentication result determining sub-module 532 is configured to determine that the voiceprint authentication of the user is successful if the matching value calculated by the calculating sub-module 531 is greater than or equal to a preset threshold, and to determine that the voiceprint authentication of the user has failed if the matching value is less than the preset threshold. The preset threshold may be set according to the system performance and/or implementation requirements, and its value is not limited herein. 
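
For illustration, the following Python sketch shows one common way such a matching value may be computed, assuming (as is typical for identity-vector scoring, though FIG. 2 may specify a different method) that the matching value is the cosine similarity of the two vectors; the threshold of 0.7 is an arbitrary placeholder.

    import numpy as np

    def matching_value(v1: np.ndarray, v2: np.ndarray) -> float:
        # Cosine similarity between the two identity vectors.
        return float(np.dot(v1, v2) /
                     (np.linalg.norm(v1) * np.linalg.norm(v2)))

    def authenticate(first_ivec: np.ndarray, second_ivec: np.ndarray,
                     threshold: float = 0.7) -> bool:
        # Success iff the matching value reaches the preset threshold.
        return matching_value(first_ivec, second_ivec) >= threshold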

[0096] Further, the voiceprint authentication apparatus described above may further include an establishing module 55 and a storage module 56. 

[0097] The establishing module 55 is configured to establish a correspondence between the predilection character and the symbol before the displaying module 51 displays the first character string to the user. 

[0098] The storage module 56 is configured to store the correspondence established by the establishing module 55. 

[0099] For example, the user may select any numbers from the digits 0 to 9 as predilection characters according to his/her own predilection; for example, "1" and "6" may be selected as the predilection characters, with "1" set to correspond to the symbol "#" and "6" to the symbol "@". The establishing module 55 then establishes the correspondence between "1" and "#" and the correspondence between "6" and "@", and the storage module 56 stores the correspondences established by the establishing module 55. 
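
The bookkeeping of the establishing module 55 and the storage module 56 might be sketched as follows; the digit-and-symbol validation and the returned dict standing in for persistent storage are simplifying assumptions (pictures and Chinese characters, also allowed above, are omitted for brevity).

    ALLOWED_SYMBOLS = set("!@#$%^&*()")  # keyboard special characters only

    def establish_correspondence(pairs: dict) -> dict:
        for char, symbol in pairs.items():
            if char not in "0123456789":
                raise ValueError("predilection character must be a digit")
            if symbol not in ALLOWED_SYMBOLS:
                raise ValueError("unsupported symbol")
        return dict(pairs)  # handed to the storage module

    stored = establish_correspondence({"1": "#", "6": "@"})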

[0100] Then the displaying module 51 is further configured to display at least one second character string to the user, in which a second character string comprises the predilection character and the predilection character is displayed as the symbol corresponding to the predilection character in the second character string. 

[0101] The character string (i.e., the first character string or the second character string) displayed by the displaying module 51 to the user includes characters displayed in plaintext, and the plaintext characters are all distinct from one another. 

[0102] In order to improve safety and to prevent fraud by sound recording, the at least one second character string displayed by the displaying module 51 to the user is a completely random character string, with no pattern to follow. In order to cover a bigger sample space, each number in the second character string may appear only once, that is, no two plaintext characters in a second character string displayed by the displaying module 51 are identical. For example, the second character string may be "32149658", but cannot be "32149628", in which "2" appears twice. At the same time, the second character string contains the predilection character preset by the user. 

[0103] The obtaining module 52 is further configured to obtain at least one speech of the at least one second character string read by the user, and to obtain at least one voiceprint identity vector of the at least one speech, and to obtain the second voiceprint identity vector registered by the user according to the at least one voiceprint identity vector of the at least one speech. 
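
One simple way the second voiceprint identity vector may be derived from the enrollment utterances is to average their per-utterance identity vectors, as in the sketch below; the disclosure leaves the exact combination method open, so the mean is an assumption.

    import numpy as np

    def register_voiceprint(utterance_ivectors) -> np.ndarray:
        # Average the identity vectors of all enrollment utterances.
        return np.mean(np.stack(utterance_ivectors), axis=0)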

[0104] The storage module 56 is further configured to store the second voiceprint identity vector registered by the user. 

[0105] Further, the speech recognition module 54 is further configured to perform a speech recognition on the at least one speech to judge whether a speech of the symbol in the at least one speech corresponds to the predilection character before the obtaining module 52 obtains the at least one voiceprint identity vector of the at least one speech. The obtaining module 52 is further configured to obtain the at least one voiceprint identity vector of the at least one speech if the speech recognition module 54 determines that the speech of the symbol in the at least one speech corresponds to the predilection character. 

[0106] In other words, for each character string read by the user, a text matching is performed in the server. Only when the user reads the predilection character correctly does the obtaining module 52 create a model; otherwise, the user needs to read the character string aloud again. 

[0107] In this embodiment, the obtaining module 52 is configured to obtain a voiceprint identity vector of a speech by extracting an acoustic characteristic of the speech and calculating a posteriori probability of the acoustic characteristic under a universal background model, in which the posteriori probability is subject to a Gaussian distribution and its expectation is the voiceprint identity vector of the speech. Specifically, for the method by which the obtaining module 52 obtains a voiceprint identity vector of a speech, reference may be made to the description of the embodiment shown in FIG. 3, which will not be elaborated herein. 
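
For readers unfamiliar with identity-vector extraction, the following simplified Python sketch illustrates the computation this paragraph describes, assuming a diagonal-covariance Gaussian mixture as the universal background model and a pre-trained total variability matrix T; acoustic feature extraction (e.g. MFCCs) is omitted, and all shapes and names are illustrative rather than taken from the disclosure.

    import numpy as np

    def ivector(features, weights, means, variances, T):
        # features: (frames, dim); weights: (C,);
        # means, variances: (C, dim); T: (C * dim, ivec_dim).
        C, dim = means.shape
        # Frame-level posteriors (responsibilities) under the UBM.
        log_post = np.stack([
            -0.5 * np.sum((features - means[c]) ** 2 / variances[c]
                          + np.log(2 * np.pi * variances[c]), axis=1)
            + np.log(weights[c])
            for c in range(C)], axis=1)
        post = np.exp(log_post - log_post.max(axis=1, keepdims=True))
        post /= post.sum(axis=1, keepdims=True)
        # Zeroth- and centered first-order Baum-Welch statistics.
        N = post.sum(axis=0)                          # (C,)
        F = post.T @ features - N[:, None] * means    # (C, dim)
        # The posterior of the latent factor is Gaussian; its
        # expectation is taken as the voiceprint identity vector.
        sigma_inv = (1.0 / variances).reshape(-1)     # (C * dim,)
        TtS = T.T * sigma_inv                         # (ivec_dim, C * dim)
        precision = np.eye(T.shape[1]) + (TtS * np.repeat(N, dim)) @ T
        return np.linalg.solve(precision, TtS @ F.reshape(-1))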

[0108] The voiceprint authentication apparatus provided by embodiments of the present disclosure essentially combines voiceprint authentication with a password to improve the user experience of the voiceprint payment system. The safety of voiceprint authentication may be improved by concealing some characters in the random character string, while at the same time the psychological need of the user not to have the password displayed in plaintext is satisfied. Unlike a traditional lengthy password, the concealed characters are few, and are easy to remember by association with special symbols. 

[0109] The voiceprint authentication apparatus provided by embodiments of the present disclosure likewise improves the safety of payment. Since the voiceprint information of the user, which is difficult to imitate, is used, both the convenience and the safety are improved: it is unnecessary for the user to input a password and then verify it, thus improving the convenience and efficiency of payment. Compared with simple voiceprint payment, the apparatus provided by embodiments of the present disclosure combines the voiceprint with the user's predilection, which may achieve the cumulative effect of traditional voiceprint authentication and traditional password authentication. By concealing characters according to the user's predilection, it may satisfy the psychological need of the user not to have the password displayed in plaintext, thus improving the user experience. In addition, the apparatus improves the usability of the voiceprint password. Unlike the dull traditional voiceprint password, the voiceprint password in the embodiments of the present disclosure incorporates special characters, pictures or Chinese characters, such that the voiceprint password is friendlier and easier to use. 

[0110] Combination of Features 

[0111] Features described above, as well as those claimed below, may be combined in various ways without departing from the scope hereof. The following examples illustrate possible, non-limiting combinations. Although the present invention has been described above, it should be clear that many changes and modifications may be made to the process and product without departing from the spirit and scope of this invention: 

[0112] (a) A voiceprint authentication method, comprising: 

[0113] displaying a first character string to a user, wherein the first character string comprises a predilection character preset by the user, and the predilection character is displayed as a symbol corresponding to the predilection character in the first character string; 

[0114] obtaining a speech of the first character string read by the user; 

[0115] obtaining a first voiceprint identity vector of the speech of the first character string; and 

[0116] comparing the first voiceprint identity vector with a second voiceprint identity vector registered by the user to determine a result of a voiceprint authentication. 

[0117] (b) In the method denoted as (a), in which before obtaining a first voiceprint identity vector of the speech of the first character string, the method further comprises: 

[0118] performing a speech recognition on the speech of the first character string to judge whether a speech of the symbol in the speech of the first character string corresponds to the predilection character; and 

[0119] obtaining the first voiceprint identity vector if the speech of the symbol in the speech of the first character string corresponds to the predilection character. 

[0120] (c) In the method denoted as (a) or (b), in which comparing the first voiceprint identity vector with a second voiceprint identity vector registered by the user to determine a result of a voiceprint authentication comprises: 

[0121] calculating a matching value between the first voiceprint identity vector and the second voiceprint identity vector; 

[0122] determining that the voiceprint authentication is successful if the matching value is greater than or equal to a preset threshold; and 

[0123] determining that the voiceprint authentication has failed if the matching value is less than the preset threshold. 

[0124] (d) In the method denoted as (a) or (b), in which before displaying a first character string to a user, the method further comprises: 

[0125] establishing and storing a correspondence between the predilection character and the symbol. 

[0126] (e) In the method denoted as (d), in which after establishing and storing a correspondence between the predilection character and the symbol, the method further comprises: 

[0127] displaying at least one second character string to the user, wherein a second character string comprises the predilection character and the predilection character is displayed as the symbol in the second character string; 

[0128] obtaining at least one speech of the at least one second character string read by the user; 

[0129] obtaining at least one voiceprint identity vector of the at least one speech; 

[0130] obtaining the second voiceprint identity vector according to the at least one voiceprint identity vector; and 

[0131] storing the second voiceprint identity vector. 

[0132] (f) In the method denoted as (e), in which before obtaining at least one voiceprint identity vector of the at least one speech, the method further comprises: 

[0133] performing a speech recognition on the at least one speech to judge whether a speech of the symbol in the at least one speech corresponds to the predilection character; and 

[0134] obtaining at least one voiceprint identity vector if the speech of the symbol in the at least one speech corresponds to the predilection character. 

[0135] (g) In the method denoted as (a), (e) or (f), in which obtaining a first voiceprint identity vector of the speech of the first character string comprises: 

[0136] extracting an acoustic characteristic of the speech of the first character string; and 

[0137] calculating a posteriori probability of the acoustic characteristic under a universal background model, wherein the posteriori probability is subject to a Gaussian distribution, and an expectation of the posteriori probability is the first voiceprint identity vector. 

[0138] (h) In the method denoted as (a) or (e), in which the first character string displayed to the user comprises plaintext characters, and the plaintext characters are not identical to each other. 

[0139] (i) A voiceprint authentication apparatus, comprising: 

[0140] a displaying module, configured to display a first character string to a user, wherein the first character string comprises a predilection character preset by the user, and the predilection character is displayed as a symbol corresponding to the predilection character in the first character string; 

[0141] an obtaining module, configured to obtain a speech of the first character string read by the user, and to obtain a first voiceprint identity vector of the speech of the first character string; and 

[0142] a determining module, configured to compare the first voiceprint identity vector with a second voiceprint identity vector registered by the user to determine a result of a voiceprint authentication. 

[0143] (j) In the apparatus denoted as (i), further comprising: 

[0144] a speech recognition module, configured to perform a speech recognition on the speech of the first character string to judge whether a speech of the symbol in the speech of the first character string corresponds to the predilection character before the obtaining module obtains the first voiceprint identity vector, 

[0145] in which the obtaining module is specifically configured to obtain the first voiceprint identity vector if the speech recognition module determines that the speech of the symbol in the speech of the first character string corresponds to the predilection character. 

[0146] (k) In the apparatus denoted as (i) or (j), in which the determining module comprises: 

[0147] a calculating sub-module, configured to calculate a matching value between the first voiceprint identity vector and the second voiceprint identity vector; and 

[0148] an authentication result determining sub-module, configured to determine that the voiceprint authentication is successful if the matching value is greater than or equal to a preset threshold, and to determine that the voiceprint authentication has failed if the matching value is less than the preset threshold. 

[0149] (l) In the apparatus denoted as (i) or (j), further comprising: 

[0150] an establishing module, configured to establish a correspondence between the predilection character and the symbol before the displaying module displays the first character string to the user; and 

[0151] a storage module, configured to store the correspondence. 

[0152] (m) In the apparatus denoted as (l), in which 

[0153] the displaying module is further configured to display at least one second character string to the user, in which a second character string comprises the predilection character and the predilection character is displayed as the symbol in the second character string; 

[0154] the obtaining module is further configured to obtain at least one speech of the at least one second character string read by the user, and to obtain at least one voiceprint identity vector of the at least one speech, and to obtain the second voiceprint identity vector according to the at least one voiceprint identity vector; and 

[0155] the storage module is further configured to store the second voiceprint identity vector. 

[0156] (n) In the apparatus denoted as (m), in which 

[0157] the speech recognition module is further configured to perform a speech recognition on the at least one speech to judge whether a speech of the symbol in the at least one speech corresponds to the predilection character before the obtaining module obtains the at least one voiceprint identity vector; and 

[0158] the obtaining module is further configured to obtain the at least one voiceprint identity vector if the speech recognition module determines that the speech of the symbol in the at least one speech corresponds to the predilection character. 

[0159] (o) In the apparatus denoted as (i), (m) or (n), in which the obtaining module is configured to obtain a first voiceprint identity vector of the speech of the first character string by: 

[0160] extracting an acoustic characteristic of the speech of the first character string; and 

[0161] calculating a posteriori probability of the acoustic characteristic under a universal background model, wherein the posteriori probability is subject to a Gaussian distribution, and an expectation of the posteriori probability is the first voiceprint identity vector. 

[0162] (p) In the apparatus denoted as (i) or (m), in which the first character string displayed by the displaying module to the user comprises plaintext characters and the plaintext characters are not identical to each other. 

[0163] (q) A program product having stored therein instructions that, when executed by one or more processors of a device, cause the device to perform any one of the methods denoted as (a) to (h). 

[0164] In the description of the present disclosure, it should be understood that, terms such as "first" and "second" are used herein for purposes of description and are not intended to indicate or imply relative importance or significance. In addition, in the description of the present disclosure, the term "a plurality of" means two or more. 

[0165] Any process or method described in a flow chart or described herein in other ways may be understood to include one or more modules, segments or portions of codes of executable instructions for achieving specific logical functions or steps in the process, and the scope of a preferred embodiment of the present disclosure includes other implementations, which should be understood by those skilled in the art. 

[0166] It should be understood that each part of the present disclosure may be realized by hardware, software, firmware or a combination thereof. In the above embodiments, a plurality of steps or methods may be realized by software or firmware stored in a memory and executed by an appropriate instruction execution system. For example, if realized by hardware, as in another embodiment, the steps or methods may be realized by one or a combination of the following techniques known in the art: a discrete logic circuit having a logic gate circuit for realizing a logic function upon a data signal, an application-specific integrated circuit having an appropriate combinational logic gate circuit, a programmable gate array (PGA), a field programmable gate array (FPGA), etc. 

[0167] Those skilled in the art shall understand that all or part of the steps in the above exemplifying methods of the present disclosure may be achieved by instructing the related hardware with programs. The programs may be stored in a computer readable storage medium, and, when run on a computer, the programs perform one or a combination of the steps in the method embodiments of the present disclosure. 

[0168] In addition, each functional unit of the embodiments of the present disclosure may be integrated in a processing module, or each unit may exist physically separately, or two or more units may be integrated in one processing module. The integrated module may be realized in the form of hardware or in the form of a software functional module. When the integrated module is realized in the form of a software functional module and is sold or used as a standalone product, it may be stored in a computer readable storage medium. 

[0169] The storage medium mentioned above may be a read-only memory, a magnetic disk, a CD, etc. 

[0170] Reference throughout this specification to "an embodiment," "some embodiments," "one embodiment", "another example," "an example," "a specific example," or "some examples," means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present disclosure. Thus, the appearances of the phrases such as "in some embodiments," "in one embodiment", "in an embodiment", "in another example," "in an example," "in a specific example," or "in some examples," in various places throughout this specification are not necessarily referring to the same embodiment or example of the present disclosure. Furthermore, the particular features, structures, materials, or characteristics may be combined in any suitable manner in one or more embodiments or examples. 

[0171] Although explanatory embodiments have been shown and described, it would be appreciated by those skilled in the art that the above embodiments cannot be construed to limit the present disclosure, and changes, alternatives, and modifications can be made in the embodiments without departing from scope of the present disclosure.