https://en.wikipedia.org/wiki/Probabilistic_logic_network :
> The basic goal of PLN is to provide reasonably accurate probabilistic inference in a way that is compatible with both term logic and predicate logic, and scales up to operate in real time on large dynamic knowledge bases.
> The goal underlying the theoretical development of PLN has been the creation of practical software systems carrying out complex, useful inferences based on uncertain knowledge and drawing uncertain conclusions. PLN has been designed to allow basic probabilistic inference to interact with other kinds of inference such as intensional inference, fuzzy inference, and higher-order inference using quantifiers, variables, and combinators, and be a more convenient approach than Bayesian networks (or other conventional approaches) for the purpose of interfacing basic probabilistic inference with these other sorts of inference. In addition, the inference rules are formulated in such a way as to avoid the paradoxes of Dempster–Shafer theory.
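As a concrete illustration of the kind of truth-value arithmetic PLN does: the independence-based deduction rule combines the strengths of Inheritance(A, B) and Inheritance(B, C) into an estimate for Inheritance(A, C). A minimal sketch, with the formula as I recall it from the PLN book (function and variable names are mine):

  def pln_deduction(s_ab, s_bc, s_b, s_c):
      # PLN independence-based deduction (Goertzel et al., "Probabilistic
      # Logic Networks"): estimate the strength of Inheritance(A, C) from
      # Inheritance(A, B), Inheritance(B, C), and the term probabilities.
      if s_b >= 1.0:
          return s_c  # degenerate case: B covers everything
      return s_ab * s_bc + (1.0 - s_ab) * (s_c - s_b * s_bc) / (1.0 - s_b)

  # e.g. Inheritance(cat, mammal)=0.9, Inheritance(mammal, animal)=0.95
  print(pln_deduction(0.9, 0.95, s_b=0.1, s_c=0.2))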
Has anybody already taught / reinforced an OpenCog [PLN, MOSES] AtomSpace hypergraph agent to do Linked Data prep, and also convex optimization with AutoML using something better than grid search, i.e. gradient-based methods?
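On "better than grid search": for a smooth convex objective, gradient-based search reaches the optimum in far fewer evaluations than an exhaustive grid. A minimal sketch with SciPy; the quadratic objective is just a stand-in for e.g. a validation loss:

  import numpy as np
  from scipy.optimize import minimize

  def objective(x):
      # Toy convex objective; stands in for a real validation loss
      return (x[0] - 3.0) ** 2 + (x[1] + 1.0) ** 2

  # Grid search: evaluation count grows exponentially with dimensionality
  grid = [(a, b) for a in np.linspace(-5, 5, 50) for b in np.linspace(-5, 5, 50)]
  best_grid = min(grid, key=objective)  # 2500 evaluations

  # Gradient-based (BFGS, with numerically approximated gradients here):
  # converges in a handful of evaluations
  result = minimize(objective, x0=[0.0, 0.0], method="BFGS")
  print(best_grid, result.x)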
Perhaps teaching users to bias analyses with e.g. Yellowbrick and the sklearn APIs would be a good curriculum traversal?
There's probably an awesome-automl list by now? Again, the sklearn interfaces.
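The sklearn interface that AutoML tools typically target is small: fit/predict, plus get_params/set_params inherited from BaseEstimator. A minimal sketch of a conforming estimator (a toy of my own, not from any library):

  import numpy as np
  from sklearn.base import BaseEstimator, ClassifierMixin

  class MajorityClassifier(BaseEstimator, ClassifierMixin):
      """Toy estimator: always predicts the most frequent training label."""

      def fit(self, X, y):
          values, counts = np.unique(y, return_counts=True)
          self.majority_ = values[np.argmax(counts)]
          return self

      def predict(self, X):
          return np.full(len(X), self.majority_)

Because it conforms to the interface, it composes with Pipeline, cross_val_score, GridSearchCV, and (typically) AutoML searchers without modification.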
TIL that SymPy supports NumPy, PyTorch, and TensorFlow [Quantum; TFQ?] as backends; and with a Computer Algebra System, something for mutating the Python AST may not be necessary: symbolic expression trees (which carry no human-readable comments or source-level symbol names) can be transformed directly? Lean mathlib: https://github.com/leanprover-community/mathlib ; and then reasoning about concurrent / distributed systems (with side channels in actual physical component space) with e.g. TLA+.
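For instance, symbolic differentiation plus lambdify means you never need to mutate source ASTs to get derivatives; the expression tree is the program. A minimal sketch using the NumPy backend (which is built in; the PyTorch / TensorFlow backends are per the claim above):

  import numpy as np
  import sympy as sp

  x = sp.symbols("x")
  expr = sp.sin(x) * sp.exp(-x)
  dexpr = sp.diff(expr, x)  # symbolic derivative, no AST hacking

  f = sp.lambdify(x, expr, "numpy")    # compile expression tree to a NumPy callable
  df = sp.lambdify(x, dexpr, "numpy")

  xs = np.linspace(0, 2 * np.pi, 5)
  print(f(xs), df(xs))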
There are new UUID formats that are timestamp-sortable; for when blockchain cryptographic hashes aren't enough entropy. "New UUID Formats – IETF Draft" https://news.ycombinator.com/item?id=28088213
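A minimal sketch of the draft's UUIDv7 layout - a 48-bit big-endian Unix timestamp in milliseconds, then the version/variant bits, then random bits - so IDs sort by creation time. Field sizes are per the draft; the code itself is my own illustration:

  import os
  import time
  import uuid

  def uuid7():
      # Top 48 bits: Unix timestamp in milliseconds; bottom 80 bits: random
      ts_ms = int(time.time() * 1000) & ((1 << 48) - 1)
      rand = int.from_bytes(os.urandom(10), "big")  # 80 random bits
      value = (ts_ms << 80) | rand
      value &= ~(0xF << 76); value |= 0x7 << 76  # version nibble = 7
      value &= ~(0x3 << 62); value |= 0x2 << 62  # variant bits = 0b10
      return uuid.UUID(int=value)

  a = uuid7(); time.sleep(0.002); b = uuid7()
  assert str(a) < str(b)  # later-millisecond IDs sort lexicographically after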
... You can host online ML algos through SingularityNET, which also supports PayPal now for the RL.
AFAIU, while there are DLTs that charge for CPU, RAM, and data storage between points in spacetime, none yet incentivize energy efficiency by varying costs depending upon whether the instructions execute on an FPGA, ASIC, CPU, GPU, TPU, or QPU?
To be 200% green - to put a 200% green footer with search-discoverable RDFa on your site - I think you need PPAs (Power Purchase Agreements) and all directly-sourced clean energy.
(Energy efficiency is very relevant to ML/AI/AGI: while it may be the case that the dumb universal function approximator will eventually find a better solution, "just leave it on all night/month/K12+postdoc" in parallel is a very expensive proposition with no apparent oracle; and then ethically filtering the solutions still costs at least one human.)
> The Yellowbrick API is specifically designed to play nicely with scikit-learn. The primary interface is therefore a Visualizer – an object that learns from data to produce a visualization. Visualizers are scikit-learn Estimator objects and have a similar interface along with methods for drawing. In order to use visualizers, you simply use the same workflow as with a scikit-learn model, import the visualizer, instantiate it, call the visualizer’s fit() method, then in order to render the visualization, call the visualizer’s show() method.
> For example, there are several visualizers that act as transformers, used to perform feature analysis prior to fitting a model. The following example visualizes a high-dimensional data set with parallel coordinates:
  # Data loading added to make the docs snippet runnable; any labeled X, y works.
  from sklearn.datasets import load_iris
  from yellowbrick.features import ParallelCoordinates

  X, y = load_iris(return_X_y=True)
  visualizer = ParallelCoordinates()
  visualizer.fit_transform(X, y)
  visualizer.show()
> As you can see, the workflow is very similar to using a scikit-learn transformer, and visualizers are intended to be integrated along with scikit-learn utilities. Arguments that change how the visualization is drawn can be passed into the visualizer upon instantiation, similarly to how hyperparameters are included with scikit-learn models.
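Beyond feature-analysis transformers, Yellowbrick visualizers can also wrap an estimator for model evaluation. A minimal sketch with the ResidualsPlot regressor visualizer (a standard Yellowbrick visualizer; the dataset and estimator choices here are arbitrary):

  from sklearn.datasets import load_diabetes
  from sklearn.linear_model import Ridge
  from sklearn.model_selection import train_test_split
  from yellowbrick.regressor import ResidualsPlot

  X, y = load_diabetes(return_X_y=True)
  X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

  # Drawing arguments are passed at instantiation, like hyperparameters
  visualizer = ResidualsPlot(Ridge(), hist=True)
  visualizer.fit(X_train, y_train)  # fits the wrapped estimator
  visualizer.score(X_test, y_test)  # draws residuals against test data
  visualizer.show()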
IIRC, some AutoML tools - which test various combinations of, stacks of, and ensembles of e.g. Estimators - do test hierarchical ensembles? Are those 'piecewise' and thus ultimately not the unified theory we were looking for here either (though often a good-enough, fast-enough, sufficient approximate solution with a sufficiently low error term)?
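For reference, the hierarchical-ensemble pattern these tools search over looks roughly like sklearn's stacking, where a level-1 meta-estimator learns from the level-0 estimators' predictions. A minimal sketch (estimator choices are arbitrary):

  from sklearn.datasets import load_breast_cancer
  from sklearn.ensemble import RandomForestClassifier, StackingClassifier
  from sklearn.linear_model import LogisticRegression
  from sklearn.model_selection import cross_val_score
  from sklearn.svm import SVC

  X, y = load_breast_cancer(return_X_y=True)

  # Level-0 base estimators; a level-1 LogisticRegression stacks their outputs
  stack = StackingClassifier(
      estimators=[("rf", RandomForestClassifier(random_state=0)),
                  ("svc", SVC(probability=True, random_state=0))],
      final_estimator=LogisticRegression(max_iter=1000),
  )
  print(cross_val_score(stack, X, y, cv=5).mean())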
Related, for visualizing RL training metrics - openai/baselines: "Logging and vizualizing learning curves and other training metrics" https://github.com/openai/baselines#logging-and-vizualizing-...
https://en.wikipedia.org/wiki/AlphaZero