
Embedding Symbolic Knowledge into Deep Networks

Embedding Symbolic Knowledge into Deep Networks.
Yaqi Xie, Ziwei Xu, Mohan S. Kankanhalli, Kuldeep S. Meel and Harold Soh.
In Proceedings of Advances in Neural Information Processing Systems (NeurIPS), December 2019.

Download

[PDF] 

Abstract

In this work, we aim to leverage prior symbolic knowledge to improve the performance of deep models. We propose a graph embedding network that projects propositional formulae (and assignments) onto a manifold via an augmented Graph Convolutional Network (GCN). To generate semantically faithful embeddings, we develop techniques to recognize node heterogeneity and a semantic regularization that incorporates structural constraints into the embedding. Experiments show that our approach improves the performance of models trained to perform entailment checking and visual relation prediction. Interestingly, we observe a connection between the tractability of the propositional theory representation and the ease of embedding. Future exploration of this connection may elucidate the relationship between knowledge compilation and vector representation learning.
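To make the idea concrete, the sketch below represents a small propositional formula as a graph with heterogeneous node types (operators vs. variables) and runs one message-passing step. This is a toy illustration only, not the authors' LENSR implementation (see the linked GitHub repository); the example formula, the feature encoding, and the plain averaging rule are all assumptions made for exposition.

```python
# Toy sketch: a propositional formula as a graph, plus one GCN-style
# message-passing step. Illustrative only -- not the paper's LENSR code.

# Formula: (a AND b) OR (NOT c), as a DAG of typed nodes.
nodes = ["OR", "AND", "NOT", "a", "b", "c"]
edges = [(0, 1), (0, 2), (1, 3), (1, 4), (2, 5)]  # parent -> child

# Heterogeneous nodes get distinct initial features:
# [is_var, is_and, is_or, is_not] (a hypothetical one-hot scheme).
def init_feature(label):
    if label == "AND":
        return [0.0, 1.0, 0.0, 0.0]
    if label == "OR":
        return [0.0, 0.0, 1.0, 0.0]
    if label == "NOT":
        return [0.0, 0.0, 0.0, 1.0]
    return [1.0, 0.0, 0.0, 0.0]  # variable leaf

features = [init_feature(n) for n in nodes]

# Undirected adjacency with self-loops, as in a basic GCN layer.
neighbors = {i: {i} for i in range(len(nodes))}
for u, v in edges:
    neighbors[u].add(v)
    neighbors[v].add(u)

# One propagation step: each node averages its neighborhood's features,
# so operator nodes mix in information from their subformulae.
def propagate(feats):
    out = []
    for i in range(len(feats)):
        nbrs = sorted(neighbors[i])
        avg = [sum(feats[j][k] for j in nbrs) / len(nbrs)
               for k in range(len(feats[i]))]
        out.append(avg)
    return out

embedded = propagate(features)
print(embedded[0])  # the root's vector now reflects its children
```

After one step, the root OR node's vector blends the AND and NOT children's features; stacking such layers (with learned weights rather than plain averaging) is what produces the formula embeddings the abstract describes.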

BibTeX

@inproceedings{XXKMS19,
  title={Embedding Symbolic Knowledge into Deep Networks},
  author={
    Xie, Yaqi and Xu, Ziwei and Kankanhalli, Mohan S. and Meel, Kuldeep S. and
    Soh, Harold
  },
  booktitle=NIPS,
  year={2019},
  month=dec,
  bib2html_dl_pdf={https://arxiv.org/abs/1909.01161},
  code={https://github.com/ZiweiXU/LENSR},
  bib2html_pubtype={Refereed Conference},
  bib2html_rescat={Formal Methods 4 ML},
  abstract={
    In this work, we aim to leverage prior symbolic knowledge to improve the
    performance of deep models.
    We propose a graph embedding network that projects propositional formulae
    (and assignments) onto a manifold via
    an augmented Graph Convolutional Network (GCN). To generate
    semantically-faithful embeddings, we develop techniques
    to recognize node heterogeneity and a semantic regularization that
    incorporates structural constraints into the
    embedding. Experiments show that our approach improves the performance of
    models trained to perform entailment
    checking and visual relation prediction. Interestingly, we observe a
    connection between the tractability of the propositional theory
    representation and the ease of embedding. Future exploration of this
    connection may elucidate the relationship between knowledge compilation and
    vector representation learning.
  },
}

Generated by bib2html.pl (written by Patrick Riley with layout from Sanjit A. Seshia ) on Tue Apr 28, 2026 01:27:21