Influence Maximization Over Markovian Graphs: A Stochastic Optimization Approach


Buddhika Nettasinghe; Vikram Krishnamurthy

Depending on the initial adopters of an innovation, it can either lead to a large number of people adopting that innovation, or it might die away quickly without spreading. Therefore, an idea central to many application domains, such as viral marketing and message spreading, is influence maximization: selecting a set of initial adopters from a social network that can cause a massive spread of an innovation (or, more generally, an idea, a product, or a message). To this end, we consider the problem of randomized influence maximization over a Markovian graph process: given a fixed set of individuals whose connectivity graph evolves as a Markov chain, estimate the probability distribution (over this fixed set of nodes) that samples an individual who can initiate the largest information cascade (in expectation). Further, it is assumed that the sampling process affects the evolution of the graph, i.e., the sampling distribution and the transition probability matrix are functionally dependent. In this setup, recursive stochastic optimization algorithms are presented to estimate the optimal sampling distribution for two cases: 1) the transition probabilities of the graph are unknown, but the graph can be observed perfectly; 2) the transition probabilities of the graph are known, but the graph is observed in noise. These algorithms consist of a neighborhood-size estimation algorithm combined with a variance reduction method, a Bayesian filter, and a stochastic gradient algorithm. Convergence of the algorithms is established theoretically, and numerical results are provided to illustrate how the algorithms work.
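To give a flavor of the recursive stochastic optimization idea described above, the following is a minimal, self-contained sketch (not the paper's algorithm): the sampling distribution over candidate seed nodes is parameterized via a softmax so the iterates stay on the probability simplex, and a score-function (REINFORCE-style) stochastic gradient with a running-average baseline for variance reduction is used to adapt it from noisy cascade-size estimates. The node means in `true_mean` and the noisy estimator `f_hat` are toy placeholders standing in for the paper's neighborhood-size estimation step.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setting: N candidate seed nodes. f_hat(i) returns a noisy estimate of
# the expected cascade size when node i seeds the spread (a placeholder for
# a real neighborhood-size estimator; here just a fixed mean plus noise).
N = 5
true_mean = np.array([1.0, 2.0, 1.5, 5.0, 1.2])

def f_hat(i):
    return true_mean[i] + rng.normal(scale=0.5)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Softmax parameterization keeps the sampling distribution on the simplex.
theta = np.zeros(N)
baseline = 0.0  # running average of rewards, used for variance reduction

for k in range(1, 20001):
    p = softmax(theta)
    i = rng.choice(N, p=p)          # sample a seed node from current distribution
    r = f_hat(i)                    # noisy cascade-size estimate
    baseline += (r - baseline) / k  # update variance-reduction baseline
    grad_log_p = -p
    grad_log_p[i] += 1.0            # gradient of log p(i) under softmax
    step = 0.5 / k**0.6             # decreasing step size (stochastic approximation)
    theta += step * (r - baseline) * grad_log_p

p = softmax(theta)
# The distribution should concentrate mass on the node with the largest
# expected cascade size (index 3 in this toy example).
```

This is only a stochastic-approximation sketch under the stated toy assumptions; the paper's actual algorithms additionally handle the Markovian graph dynamics, the functional dependence between the sampling distribution and the transition matrix, and noisy graph observations via a Bayesian filter.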
