Markov processes for stochastic modeling

Bibliographic Details
Main Author: Ibe, Oliver C. (Oliver Chukwudi)
Format: eBook
Language: English
Published: London: Elsevier, 2013
Edition: Second edition
Series: Elsevier insights
Subjects: Markov processes; Stochastic processes
Online Access: Get full text

Abstract Markov processes are processes that have limited memory. In particular, their dependence on the past is only through the previous state. They are used to model the behavior of many systems including communications systems, transportation networks, image segmentation and analysis, biological systems and DNA sequence analysis, random atomic motion and diffusion in physics, social mobility, population studies, epidemiology, animal and insect migration, queueing systems, resource management, dams, financial engineering, actuarial science, and decision systems. Covering a wide range of areas of application of Markov processes, this second edition is revised to highlight the most important aspects as well as the most recent trends and applications of Markov processes. The author spent over 16 years in the industry before returning to academia, and he has applied many of the principles covered in this book in multiple research projects. Therefore, this is an applications-oriented book that also includes enough theory to provide a solid ground in the subject for the reader. Presents both the theory and applications of the different aspects of Markov processes. Includes numerous solved examples as well as detailed diagrams that make it easier to understand the principle being presented. Discusses different applications of hidden Markov models, such as DNA sequence analysis and speech analysis.
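To make the limited-memory property concrete, here is a minimal Python sketch (not taken from the book) that simulates a two-state discrete-time Markov chain; the transition matrix P, the state labels, the seed, and the number of steps are illustrative assumptions.

    import random

    # Illustrative two-state chain; P[i][j] is the assumed probability of
    # moving from state i to state j in one step.
    P = [[0.9, 0.1],
         [0.5, 0.5]]

    def step(state, rng):
        # The next state depends only on the current state, not on any
        # earlier history; this is the Markov property.
        return 0 if rng.random() < P[state][0] else 1

    rng = random.Random(42)
    state, visits = 0, [0, 0]
    for _ in range(10_000):
        state = step(state, rng)
        visits[state] += 1

    # The empirical occupancy fractions approach the stationary distribution,
    # which for this particular P is (5/6, 1/6).
    print([v / sum(visits) for v in visits])

Changing P and rerunning shows that the long-run occupancy is determined entirely by the one-step transition probabilities, regardless of the starting state.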
Author Ibe, Oliver C. (Oliver Chukwudi)
ContentType eBook
Book
DEWEY 519.233
DatabaseName CiNii Complete
Discipline Mathematics
Applied Sciences
Statistics
EISBN 0124078397
9780124078390
Edition 2
Second edition.
ExternalDocumentID 9780124078390
EBC1214329
BB12708476
ISBN 9780124077959
0124077951
IsPeerReviewed false
IsScholarly false
LCCallNum_Ident QA274.7 .I24 2013
Language English
Notes Includes bibliographical references (p. [481]-494)
Previous ed: c2009
OCLC 851971989
PQID EBC1214329
PageCount 515
PublicationCentury 2000
PublicationDate 2013
2013-05-22
PublicationDateYYYYMMDD 2013-01-01
2013-05-22
PublicationDate_xml – year: 2013
  text: 2013
PublicationDecade 2010
PublicationPlace London
PublicationPlace_xml – name: London
– name: Chantilly
PublicationSeriesTitle Elsevier insights
PublicationYear 2013
Publisher Elsevier
Publisher_xml – name: Elsevier
SourceID askewsholts
proquest
nii
SourceType Aggregation Database
Publisher
SubjectTerms Markov processes
Stochastic processes
TableOfContents Front Cover -- Markov Processes for Stochastic Modeling -- Copyright page -- Contents -- Acknowledgments -- Preface to the Second Edition -- Preface to the First Edition -- 1 Basic Concepts in Probability -- 1.1 Introduction -- 1.1.1 Conditional Probability -- 1.1.2 Independence -- 1.1.3 Total Probability and the Bayes' Theorem -- 1.2 Random Variables -- 1.2.1 Distribution Functions -- 1.2.2 Discrete Random Variables -- 1.2.3 Continuous Random Variables -- 1.2.4 Expectations -- 1.2.5 Expectation of Nonnegative Random Variables -- 1.2.6 Moments of Random Variables and the Variance -- 1.3 Transform Methods -- 1.3.1 The s-Transform -- 1.3.2 The z-Transform -- 1.4 Bivariate Random Variables -- 1.4.1 Discrete Bivariate Random Variables -- 1.4.2 Continuous Bivariate Random Variables -- 1.4.3 Covariance and Correlation Coefficient -- 1.5 Many Random Variables -- 1.6 Fubini's Theorem -- 1.7 Sums of Independent Random Variables -- 1.8 Some Probability Distributions -- 1.8.1 The Bernoulli Distribution -- 1.8.2 The Binomial Distribution -- 1.8.3 The Geometric Distribution -- 1.8.4 The Pascal Distribution -- 1.8.5 The Poisson Distribution -- 1.8.6 The Exponential Distribution -- 1.8.7 The Erlang Distribution -- 1.8.8 Normal Distribution -- 1.9 Limit Theorems -- 1.9.1 Markov Inequality -- 1.9.2 Chebyshev Inequality -- 1.9.3 Laws of Large Numbers -- 1.9.4 The Central Limit Theorem -- 1.10 Problems -- 2 Basic Concepts in Stochastic Processes -- 2.1 Introduction -- 2.2 Classification of Stochastic Processes -- 2.3 Characterizing a Stochastic Process -- 2.4 Mean and Autocorrelation Function of a Stochastic Process -- 2.5 Stationary Stochastic Processes -- 2.5.1 Strict-Sense Stationary Processes -- 2.5.2 Wide-Sense Stationary Processes -- 2.6 Ergodic Stochastic Processes -- 2.7 Some Models of Stochastic Processes -- 2.7.1 Martingales -- Stopping Times
2.7.2 Counting Processes -- 2.7.3 Independent Increment Processes -- 2.7.4 Stationary Increment Process -- 2.7.5 Poisson Processes -- Interarrival Times for the Poisson Process -- Compound Poisson Process -- Combinations of Independent Poisson Processes -- Competing Independent Poisson Processes -- Subdivision of a Poisson Process -- 2.8 Problems -- 3 Introduction to Markov Processes -- 3.1 Introduction -- 3.2 Structure of Markov Processes -- 3.3 Strong Markov Property -- 3.4 Applications of Discrete-Time Markov Processes -- 3.4.1 Branching Processes -- 3.4.2 Social Mobility -- 3.4.3 Markov Decision Processes -- 3.5 Applications of Continuous-Time Markov Processes -- 3.5.1 Queueing Systems -- 3.5.2 Continuous-Time Markov Decision Processes -- 3.5.3 Stochastic Storage Systems -- 3.6 Applications of Continuous-State Markov Processes -- 3.6.1 Application of Diffusion Processes to Financial Options -- 3.6.2 Applications of Brownian Motion -- 3.7 Summary -- 4 Discrete-Time Markov Chains -- 4.1 Introduction -- 4.2 State-Transition Probability Matrix -- 4.2.1 The n-Step State-Transition Probability -- 4.3 State-Transition Diagrams -- 4.4 Classification of States -- 4.5 Limiting-State Probabilities -- 4.5.1 Doubly Stochastic Matrix -- 4.6 Sojourn Time -- 4.7 Transient Analysis of Discrete-Time Markov Chains -- 4.8 First Passage and Recurrence Times -- 4.9 Occupancy Times -- 4.10 Absorbing Markov Chains and the Fundamental Matrix -- 4.10.1 Time to Absorption -- 4.10.2 Absorption Probabilities -- 4.11 Reversible Markov Chains -- 4.12 Problems -- 5 Continuous-Time Markov Chains -- 5.1 Introduction -- 5.2 Transient Analysis -- 5.2.1 The s-Transform Method -- 5.3 Birth and Death Processes -- 5.3.1 Local Balance Equations -- 5.3.2 Transient Analysis of Birth and Death Processes -- 5.4 First Passage Time -- 5.5 The Uniformization Method -- 5.6 Reversible CTMCs
5.7 Problems -- 6 Markov Renewal Processes -- 6.1 Introduction -- 6.2 Renewal Processes -- 6.2.1 The Renewal Equation -- 6.2.2 Alternative Approach -- 6.2.3 The Elementary Renewal Theorem -- 6.2.4 Random Incidence and Residual Time -- 6.2.5 Delayed Renewal Process -- 6.3 Renewal-Reward Process -- 6.3.1 The Reward-Renewal Theorem -- 6.4 Regenerative Processes -- 6.4.1 Inheritance of Regeneration -- 6.4.2 Delayed Regenerative Process -- 6.4.3 Regenerative Simulation -- 6.5 Markov Renewal Process -- 6.5.1 The Markov Renewal Function -- 6.6 Semi-Markov Processes -- 6.6.1 Discrete-Time SMPs -- State Probabilities -- First Passage Times -- 6.6.2 Continuous-Time SMPs -- State Probabilities -- First Passage Times -- 6.7 Markov Regenerative Process -- 6.8 Markov Jump Processes -- 6.8.1 The Homogeneous Markov Jump Process -- 6.9 Problems -- 7 Markovian Queueing Systems -- 7.1 Introduction -- 7.2 Description of a Queueing System -- 7.3 The Kendall Notation -- 7.4 The Little's Formula -- 7.5 The PASTA Property -- 7.6 The M/M/1 Queueing System -- 7.6.1 Stochastic Balance -- 7.6.2 Total Time and Waiting Time Distributions of the M/M/1 Queueing System -- 7.7 Examples of Other M/M Queueing Systems -- 7.7.1 The M/M/c Queue: The c-Server System -- 7.7.2 The M/M/1/K Queue: The Single-Server Finite-Capacity System -- 7.7.3 The M/M/c/c Queue: The c-Server Loss System -- 7.7.4 The M/M/1//K Queue: The Single-Server Finite Customer Population System -- 7.8 M/G/1 Queue -- 7.8.1 Waiting Time Distribution of the M/G/1 Queue -- 7.8.2 The M/Ek/1 Queue -- 7.8.3 The M/D/1 Queue -- 7.8.4 The M/M/1 Queue Revisited -- 7.8.5 The M/Hk/1 Queue -- 7.9 G/M/1 Queue -- 7.9.1 The Ek/M/1 Queue -- 7.9.2 The D/M/1 Queue -- 7.10 M/G/1 Queues with Priority -- 7.10.1 Nonpreemptive Priority -- 7.10.2 Preemptive Resume Priority -- 7.10.3 Preemptive Repeat Priority
7.11 Markovian Networks of Queues -- 7.11.1 Burke's Output Theorem and Tandem Queues -- 7.11.2 Jackson or Open Queueing Networks -- 7.11.3 Closed Queueing Networks -- 7.12 Applications of Markovian Queues -- 7.13 Problems -- 8 Random Walk -- 8.1 Introduction -- 8.2 Occupancy Probability -- 8.3 Random Walk as a Markov Chain -- 8.4 Symmetric Random Walk as a Martingale -- 8.5 Random Walk with Barriers -- 8.6 Gambler's Ruin -- 8.6.1 Ruin Probability -- 8.6.2 Alternative Derivation of Ruin Probability -- 8.6.3 Duration of a Game -- 8.7 Random Walk with Stay -- 8.8 First Return to the Origin -- 8.9 First Passage Times for Symmetric Random Walk -- 8.9.1 First Passage Time via the Generating Function -- 8.9.2 First Passage Time via the Reflection Principle -- 8.9.3 Hitting Time and the Reflection Principle -- 8.10 The Ballot Problem and the Reflection Principle -- 8.10.1 The Conditional Probability Method -- 8.11 Returns to the Origin and the Arc-Sine Law -- 8.12 Maximum of a Random Walk -- 8.13 Random Walk on a Graph -- 8.13.1 Random Walk on a Weighted Graph -- 8.14 Correlated Random Walk -- 8.15 Continuous-Time Random Walk -- 8.15.1 The Master Equation -- 8.16 Self-Avoiding Random Walk -- 8.17 Nonreversing Random Walk -- 8.18 Applications of Random Walk -- 8.18.1 Web Search -- 8.18.2 Insurance Risk -- 8.18.3 Content of a Dam -- 8.18.4 Cash Management -- 8.18.5 Mobility Models in Mobile Networks -- 8.19 Summary -- 8.20 Problems -- 9 Brownian Motion -- 9.1 Introduction -- 9.2 Mathematical Description -- 9.3 Brownian Motion with Drift -- 9.4 Brownian Motion as a Markov Process -- 9.5 Brownian Motion as a Martingale -- 9.6 First Passage Time of a Brownian Motion -- 9.7 Maximum of a Brownian Motion -- 9.8 First Passage Time in an Interval -- 9.9 The Brownian Bridge -- 9.10 Geometric Brownian Motion -- 9.11 Introduction to Stochastic Calculus
9.11.1 Stochastic Differential Equation and the Ito Process -- 9.11.2 The Ito Integral -- 9.11.3 The Ito's Formula -- 9.12 Solution of Stochastic Differential Equations -- 9.13 Solution of the Geometric Brownian Motion -- 9.14 The Ornstein-Uhlenbeck Process -- 9.14.1 Solution of the OU SDE -- 9.14.2 First Alternative Solution Method -- 9.14.3 Second Alternative Solution Method -- 9.15 Mean-Reverting OU Process -- 9.16 Fractional Brownian Motion -- 9.16.1 Self-Similar Processes -- 9.16.2 Long-Range Dependence -- 9.16.3 Self-Similarity and Long-Range Dependence -- 9.16.4 FBM Revisited -- 9.17 Fractional Gaussian Noise -- 9.18 Multifractional Brownian Motion -- 9.19 Problems -- 10 Diffusion Processes -- 10.1 Introduction -- 10.2 Mathematical Preliminaries -- 10.3 Models of Diffusion -- 10.3.1 Diffusion as a Limit of Random Walk: The Fokker-Planck Equation -- 10.3.1.1 Forward Versus Backward Diffusion Equations -- 10.3.2 The Langevin Equation -- 10.3.3 The Fick's Equations -- 10.4 Examples of Diffusion Processes -- 10.4.1 Brownian Motion -- 10.4.2 Brownian Motion with Drift -- 10.5 Correlated Random Walk and the Telegraph Equation -- 10.6 Introduction to Fractional Calculus -- 10.6.1 Gamma Function -- 10.6.2 Mittag-Leffler Functions -- 10.6.3 Laplace Transform -- 10.6.4 Fractional Derivatives -- 10.6.5 Fractional Integrals -- 10.6.6 Definitions of Fractional Integro-Differentials -- 10.6.7 Riemann-Liouville Fractional Derivative -- 10.6.8 Caputo Fractional Derivative -- 10.6.9 Fractional Differential Equations -- 10.6.10 Relaxation Differential Equation of Integer Order -- 10.6.11 Oscillation Differential Equation of Integer Order -- 10.6.12 Relaxation and Oscillation FDEs -- 10.7 Anomalous (or Fractional) Diffusion -- 10.7.1 Fractional Diffusion and Continuous-Time Random Walk -- 10.7.2 Solution of the Fractional Diffusion Equation -- 10.8 Problems
11 Lévy Processes
Title Markov processes for stochastic modeling
URI https://cir.nii.ac.jp/crid/1130000795865322240
https://ebookcentral.proquest.com/lib/[SITE_ID]/detail.action?docID=1214329
https://www.vlebooks.com/vleweb/product/openreader?id=none&isbn=9780124078390&uid=none