Certainly - let me try to share a succinct version germane to my day-to-day:
- We regularly perform group/segment-level risk roll-ups. This involves running computationally expensive (by insurance standards) in-house and third-party models that estimate loss from hurricanes, earthquakes, etc. A lot of our insurance data is, unsurprisingly, stored in disparate systems that don't talk to each other and, in some cases, don't have any useful interface that someone like me can query/view. Things are changing, but there's still quite a lot of history to overcome.
- We also have lots of material from outside underwriting parties in the form of Excel, CSV, and MDF files.
- We have to bring all of that together to make sense of the portfolio, so we use Trek for many of the tasks involved: running the models, processing CSV and Excel data, attaching databases, creating dashboards in PowerBI (tangent: hate it).
- Sample pipeline: query portfolio data from one DB, read CSV file from another third-party, pull both into the risk model, kick off the analysis, then execute a script to pull the results together from the model's databases or elsewhere. A rough sketch follows below.
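To make that concrete, here's a rough Python sketch of the shape of that pipeline. Everything in it - connection strings, table and column names, the riskmodel CLI - is a hypothetical stand-in, and the real thing runs through Trek rather than a standalone script:

```python
import subprocess

import pandas as pd
from sqlalchemy import create_engine

# Hypothetical connections: one engine for the portfolio DB, one for the
# risk model's own results DB.
portfolio_db = create_engine("mssql+pyodbc://portfolio-db/portfolio?driver=ODBC+Driver+17+for+SQL+Server")
model_db = create_engine("mssql+pyodbc://model-db/results?driver=ODBC+Driver+17+for+SQL+Server")

# Step 1: query portfolio data out of one database.
exposures = pd.read_sql("SELECT * FROM exposures WHERE segment = 'property'", portfolio_db)

# Step 2: read the third party's CSV export.
vendor = pd.read_csv("vendor_export.csv")

# Step 3: join the two and write an input file in whatever format the model imports.
model_input = exposures.merge(vendor, on="policy_id", how="left")
model_input.to_csv("model_input.csv", index=False)

# Step 4: kick off the analysis, treating the vendor model as a black-box CLI.
subprocess.run(["riskmodel", "run", "--input", "model_input.csv"], check=True)

# Step 5: execute a final query to pull results back out of the model's database.
results = pd.read_sql("SELECT * FROM analysis_results", model_db)
print(results.head())
```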
Happy to expand or answer other q's as you have them.
As for your other comment about R - it's just a matter of the install. Someone has to install R so that Trek can use it. Not a major problem. I pointed this out as a contrast: our org is probably smaller and has far fewer devs than what I'm reading in the post, where "bank python" sort of feels like the platform in which everything happens / everything is configured.
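For context, the R dependency amounts to something like the sketch below: a Python-side runner can only hand work off to R if Rscript is already installed and on PATH. This is the general pattern, not a description of Trek's actual internals:

```python
import subprocess

# Works only if someone has installed R; otherwise this raises FileNotFoundError.
result = subprocess.run(
    ["Rscript", "-e", "cat(mean(rnorm(1000)))"],
    capture_output=True, text=True, check=True,
)
print("R returned:", result.stdout)
```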
Thanks for a really comprehensive reply. Enjoyed your comment on PowerBI.
My background is mainly life, which is dominated (at least in Europe) by computationally demanding proprietary liability modelling systems, but I think Python / R is getting a foothold in capital calculation / aggregation.
My perception is that there's a lot more use of in-house models in the GI / property-casualty world, so more Python etc., but it sounds like you still have to interface with proprietary modelling systems.
Absolutely - and for quite a long time (I work mainly property).
There's not much (if any) appetite to completely rebuild 3rd-party geophysical vendor models. Those folks have 20+ years of work behind them and a different talent base (e.g. different types of scientists building the base models).
But we do focus on all the other stuff: making data input easier/more accurate, and the same re: output. Also, the vast majority of our capital and group-level risk work happens in-house - R, Python, etc. A sketch of the sort of input validation I mean follows below.
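A minimal sketch of that kind of pre-flight check on input data, assuming pandas and with made-up column names (policy_id, tiv, etc.):

```python
import pandas as pd

REQUIRED = {"policy_id", "location_id", "latitude", "longitude", "tiv"}

def validate_exposures(path: str) -> pd.DataFrame:
    """Sanity-check an exposure file before it goes anywhere near a vendor model."""
    df = pd.read_csv(path)
    missing = REQUIRED - set(df.columns)
    if missing:
        raise ValueError(f"missing columns: {sorted(missing)}")
    # Out-of-range coordinates usually mean transposed lat/long or a bad geocode.
    bad_coords = ~df["latitude"].between(-90, 90) | ~df["longitude"].between(-180, 180)
    if bad_coords.any():
        raise ValueError(f"{int(bad_coords.sum())} rows with out-of-range coordinates")
    # Non-positive total insured value is almost always a data-entry error.
    if (df["tiv"] <= 0).any():
        raise ValueError("non-positive TIV values found")
    return df
```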
>read CSV file from another third-party
Generally, if you're getting a CSV file from a client, that indicates to me there's a good chance they're exporting info from their systems and sending you the CSV.
Have you considered developing a client-facing API that can be used to send and digest data?
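Something minimal like the sketch below is what I'm picturing - Flask purely for illustration, with an invented endpoint - where clients push the same rows they'd otherwise export to CSV and email over:

```python
import io

import pandas as pd
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/exposures", methods=["POST"])
def ingest_exposures():
    # Clients POST the same data they'd otherwise export and email as a CSV.
    df = pd.read_csv(io.BytesIO(request.data))
    # ... validate here, then land it in a staging table instead of an inbox ...
    return jsonify({"rows_received": len(df)}), 202
```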
We have, but that's not something my team (actuarial) can accomplish without help/blessing/oversight from IT.
I don't know all the details, but we do have connections to some of our MGAs through XML dumps or perhaps real-time feeds (I'm doubtful). But that data is often missing some of the details I need. It's useful for policy admin - not for all the other stuff.
We've also explored portals, but those are fraught with concerns about double-entry.