Influence Maximization Over Markovian Graphs: A Stochastic Optimization Approach

Buddhika Nettasinghe ; Vikram Krishnamurthy

Depending on its initial adopters, an innovation can either be adopted by a large number of people or die away quickly without spreading. Therefore, an idea central to many application domains, such as viral marketing and message spreading, is influence maximization: selecting a set of initial adopters from a social network who can cause a massive spread of an innovation (or, more generally, an idea, a product, or a message). To this end, we consider the problem of randomized influence maximization over a Markovian graph process: given a fixed set of individuals whose connectivity graph evolves as a Markov chain, estimate the probability distribution (over this fixed set of nodes) that samples an individual who can initiate the largest information cascade (in expectation). Further, it is assumed that the sampling process affects the evolution of the graph, i.e., the sampling distribution and the transition probability matrix are functionally dependent. In this setup, recursive stochastic optimization algorithms are presented to estimate the optimal sampling distribution for two cases: 1) the transition probabilities of the graph are unknown, but the graph can be observed perfectly; 2) the transition probabilities of the graph are known, but the graph is observed in noise. These algorithms combine a neighborhood-size estimation algorithm with a variance reduction method, a Bayesian filter, and a stochastic gradient algorithm. Convergence of the algorithms is established theoretically, and numerical results are provided to illustrate how the algorithms work.
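To make the recursive stochastic-gradient idea concrete, the sketch below shows a generic simultaneous-perturbation (SPSA-style) ascent on a softmax-parameterized sampling distribution over a fixed node set. This is only an illustration of the stochastic-gradient layer, not the paper's actual algorithms: the `noisy_objective` stand-in for a noisy estimate of the expected cascade size, the node values, and all step-size constants are hypothetical, and the neighborhood-size estimation, variance reduction, and Bayesian filtering components described in the abstract are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(theta):
    """Map unconstrained parameters to a sampling distribution over nodes."""
    z = np.exp(theta - theta.max())
    return z / z.sum()

def noisy_objective(theta, node_value):
    # Hypothetical stand-in: a noisy estimate of the expected cascade
    # size when the initial adopter is drawn from softmax(theta).
    return softmax(theta) @ node_value + rng.normal(scale=0.1)

def spsa_step(theta, k, node_value, a=0.5, c=0.2):
    # One recursive update: simultaneous-perturbation gradient estimate
    # from two noisy objective evaluations, with decaying step sizes.
    a_k = a / (k + 1) ** 0.602
    c_k = c / (k + 1) ** 0.101
    delta = rng.choice([-1.0, 1.0], size=theta.shape)
    g = (noisy_objective(theta + c_k * delta, node_value)
         - noisy_objective(theta - c_k * delta, node_value)) / (2 * c_k * delta)
    return theta + a_k * g  # ascent: maximize expected cascade size

# Toy example: 5 nodes; node 2 initiates the largest cascade on average.
node_value = np.array([1.0, 2.0, 5.0, 1.5, 0.5])
theta = np.zeros(5)
for k in range(2000):
    theta = spsa_step(theta, k, node_value)

p = softmax(theta)  # learned distribution concentrates on node 2
```

The softmax parameterization keeps the iterate a valid probability distribution without an explicit projection step, which is one common way to run stochastic gradient ascent over the simplex.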
